[Containerd Edition] Installing a Highly Available K8s 1.23+ Cluster with Kubeadm
@TOC
Basic Environment Configuration
Node Planning
| Hostname | IP Address | Description |
| --- | --- | --- |
| k8s-master01 ~ 03 | 10.0.0.201 ~ 203 | 3 master nodes |
| k8s-master-lb | 10.0.0.236 | keepalived virtual IP |
| k8s-node01 ~ 02 | 10.0.0.204 ~ 205 | 2 worker nodes |
Subnet Planning and Software Versions
| Configuration | Notes |
| --- | --- |
| OS version | CentOS 7.9 |
| Docker version | 20.10.x |
| Pod subnet | 172.16.0.0/12 |
| Service subnet | 192.168.0.0/16 |

Note that the host network (10.0.0.0/24), the Pod subnet, and the Service subnet must not overlap.
Basic Configuration
Configure hosts on all nodes by editing /etc/hosts as follows:
```
10.0.0.201 k8s-master01
10.0.0.202 k8s-master02
10.0.0.203 k8s-master03
10.0.0.236 k8s-master-lb # if this is not a highly available cluster, this is Master01's IP
10.0.0.204 k8s-node01
10.0.0.205 k8s-node02
```
Configure the yum repositories:
```
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```
Install the required tools:
```
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
```
Disable the firewall, selinux, dnsmasq, and swap on all nodes:
```
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
```
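As a quick sanity check before continuing, you can confirm that selinux is no longer enforcing and that swap is off:

```
getenforce               # should print Permissive (Disabled after the next reboot)
free -m | grep -i swap   # the Swap line should show 0 total
```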
Time synchronization:
```
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# add to crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
```
Configure limits on all nodes:
```
ulimit -SHn 65535

vim /etc/security/limits.conf
# add the following at the end:
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
```
Passwordless SSH configuration (from Master01 to the other nodes):
```
ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
```
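A small check to confirm the keys were distributed correctly: each iteration below should print the remote hostname without prompting for a password.

```
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh -o BatchMode=yes $i hostname   # BatchMode fails fast instead of prompting
done
```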
Download all the source files:
```
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
```

Update the system on all nodes (excluding the kernel) and reboot:

```
yum update -y --exclude=kernel* && reboot
```
Kernel Upgrade
Download the kernel on Master01:
```
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
```

Copy the packages from Master01 to the other nodes:

```
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
```
Install the kernel on all nodes:
```
cd /root && yum localinstall -y kernel-ml*
# change the default kernel boot order on all nodes
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# check that the default kernel is 4.19
[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
# reboot all nodes, then check that the running kernel is 4.19
[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
```
Install ipvsadm on all nodes:
```
yum install ipvsadm ipset sysstat conntrack libseccomp -y
```
Configure the ipvs modules on all nodes:
```
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

vim /etc/modules-load.d/ipvs.conf
# add the following:
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

# enable loading at boot
systemctl enable --now systemd-modules-load.service
```
Configure the K8s kernel parameters on all nodes:
```
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
```
After rebooting the servers, verify that the configuration is still loaded:
```
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
```
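The sysctl settings from the previous step can be spot-checked the same way:

```
sysctl net.ipv4.ip_forward              # should print 1 if /etc/sysctl.d/k8s.conf was applied
sysctl net.netfilter.nf_conntrack_max   # resolvable because ipvs.conf loads nf_conntrack at boot
```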
K8s Component and Runtime Installation
Containerd Installation
Install docker-ce-20.10 on all nodes (the docker-ce packages pull in containerd.io as a dependency, which provides the Containerd runtime used below):
```
yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
```
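To confirm Containerd arrived as a dependency, a quick check:

```
rpm -q containerd.io    # the package pulled in by docker-ce
containerd --version    # prints the containerd version string
```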
Configure the modules required by Containerd (all nodes):
```
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
```

Load the modules on all nodes:

```
modprobe -- overlay
modprobe -- br_netfilter
```

Configure the kernel parameters required by Containerd on all nodes:

```
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
```

Apply the kernel parameters on all nodes:

```
sysctl --system
```

Generate Containerd's configuration file on all nodes:

```
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
```

Switch Containerd's Cgroup driver to Systemd on all nodes:

```
vim /etc/containerd/config.toml
# find containerd.runtimes.runc.options and add:
#   SystemdCgroup = true
```

On all nodes, also change sandbox_image so the Pause image matches your own version's address, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6.
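Both config.toml edits can also be scripted. This is a minimal sketch, assuming the stock layout produced by `containerd config default` (line formats can differ between containerd releases, so verify the result):

```
# switch runc to the systemd cgroup driver
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# point the sandbox (pause) image at the mirror used in this guide
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
# confirm both changes landed
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
```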
Start Containerd on all nodes and enable it at boot:

```
systemctl daemon-reload
systemctl enable --now containerd
```

Configure the runtime endpoint for the crictl client on all nodes:

```
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
```

K8s Component Installation

Install the latest 1.23 versions of kubeadm, kubelet, and kubectl on all nodes:

```
yum install kubeadm-1.23* kubelet-1.23* kubectl-1.23* -y
```

Change the Kubelet configuration to use Containerd as the runtime:

```
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
```

Enable Kubelet at boot:

```
systemctl daemon-reload
systemctl enable --now kubelet
```

High Availability Setup

Install HAProxy and KeepAlived on all master nodes via yum:

```
yum install keepalived haproxy -y
```

Configure HAProxy on all master nodes:

```
[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 10.0.0.201:6443 check
  server k8s-master02 10.0.0.202:6443 check
  server k8s-master03 10.0.0.203:6443 check
```

Configure KeepAlived on all master nodes. Master01's configuration:

```
[root@k8s-master01 etc]# mkdir /etc/keepalived
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 10.0.0.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.236
    }
    track_script {
        chk_apiserver
    }
}
```

Master02's configuration:

```
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 10.0.0.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.236
    }
    track_script {
        chk_apiserver
    }
}
```

Master03's configuration:

```
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 10.0.0.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.236
    }
    track_script {
        chk_apiserver
    }
}
```

Configure the KeepAlived health-check script on all master nodes:

```
[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
```

Make sure the script is executable (`chmod +x /etc/keepalived/check_apiserver.sh`), then start haproxy and keepalived:

```
[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
```

Cluster Initialization

Master01 Initialization

```
vim kubeadm-config.yaml
```

```
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.201
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.0.0.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.0.0.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.1 # change this version to match `kubeadm version`
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12
  serviceSubnet: 192.168.0.0/16
scheduler: {}
```

Migrate the kubeadm config file to the current format:

```
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
```

Copy new.yaml to the other master nodes:

```
for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done
```

Pull the images in advance on all master nodes:

```
kubeadm config images pull --config /root/new.yaml
```

Initialize the Master01 node:

```
kubeadm init --config /root/new.yaml --upload-certs
```

On success, the output ends like this:

```
kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
    --control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94
```

Configure environment variables on Master01 for accessing the Kubernetes cluster:

```
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
```

Adding Master Nodes

Use the join command generated by the initialization above:

```
kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
    --control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c
```
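The join commands here and below embed a bootstrap token with a 24-hour TTL (ttl: 24h0m0s in the kubeadm config). If the token has expired, a fresh join command can be generated at any time with standard kubeadm commands:

```
# prints a new worker join command with a fresh token
kubeadm token create --print-join-command
# for additional control-plane nodes, also re-upload the certificates
# and use the newly printed certificate key
kubeadm init phase upload-certs --upload-certs
```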
Adding Worker Nodes

```
kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94
```

Installing the Calico CNI Plugin

Run only on Master01:

```
cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x && cd calico/
```

Set the Pod subnet:

```
POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
kubectl apply -f calico.yaml
```

After creation, wait a few minutes and check the status:

```
kubectl get po -n kube-system
```

Metrics Server Deployment

Copy Master01's front-proxy-ca.crt to all worker nodes:

```
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt
# copy to any remaining nodes in the same way
```

Install metrics server:

```
cd /root/k8s-ha-install/kubeadm-metrics-server
# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
```

Check the Metrics Server status:

```
kubectl get po -n kube-system | grep metrics-server
```

Once the status is normal, view the metrics:

```
# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   153m         3%     1701Mi          44%
k8s-master02   125m         3%     1693Mi          44%
k8s-master03   129m         3%     1590Mi          41%
k8s-node01     73m          1%     989Mi           25%
k8s-node02     64m          1%     950Mi           24%

# kubectl top po -A
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)
kube-system   calico-kube-controllers-66686fdb54-74xkg   2m           17Mi
kube-system   calico-node-6gqpb                          21m          85Mi
kube-system   calico-node-bmvjt                          29m          76Mi
kube-system   calico-node-hdp9c                          15m          82Mi
kube-system   calico-node-wwrfv                          23m          86Mi
kube-system   calico-node-zzv88                          22m          84Mi
kube-system   calico-typha-67c6dc57d6-hj6l4              2m           23Mi
kube-system   calico-typha-67c6dc57d6-jm855              2m           22Mi
kube-system   coredns-7d89d9b6b8-sr6mf                   1m           16Mi
kube-system   coredns-7d89d9b6b8-xqwjk                   1m           16Mi
kube-system   etcd-k8s-master01                          24m          96Mi
kube-system   etcd-k8s-master02                          20m          91Mi
kube-system   etcd-k8s-master03                          21m          92Mi
kube-system   kube-apiserver-k8s-master01                41m          502Mi
kube-system   kube-apiserver-k8s-master02                35m          476Mi
kube-system   kube-apiserver-k8s-master03                71m          480Mi
kube-system   kube-controller-manager-k8s-master01       15m          65Mi
kube-system   kube-controller-manager-k8s-master02       1m           26Mi
kube-system   kube-controller-manager-k8s-master03       2m           27Mi
kube-system   kube-proxy-8lt45                           1m           18Mi
kube-system   kube-proxy-d6jfh                           1m           18Mi
kube-system   kube-proxy-hfnvz                           1m           19Mi
kube-system   kube-proxy-nsms8                           1m           18Mi
kube-system   kube-proxy-xmlhq                           3m           21Mi
kube-system   kube-scheduler-k8s-master01                2m           26Mi
kube-system   kube-scheduler-k8s-master02                2m           24Mi
kube-system   kube-scheduler-k8s-master03                2m           24Mi
kube-system   metrics-server-d54b585c4-4dqpf             46m          16Mi
```
Dashboard Deployment

Installation:

```
cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
```

Logging in to the Dashboard

Check the Dashboard port number:

```
# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
```

Check the admin token:

```
[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r4vcp
Namespace:    kube-system
Labels:
```

The Dashboard can then be accessed via any host's IP plus that port number; see the sketch below if the Service is not yet exposed on a host port.
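If the kubernetes-dashboard Service is of type ClusterIP rather than NodePort (this depends on the manifests in the dashboard/ directory, so treat this step as an assumption and check your Service first), change it so the Dashboard is reachable on a host port:

```
# change "type: ClusterIP" to "type: NodePort" in the Service spec
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# then read the assigned NodePort from the PORT(S) column
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
```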