Kubernetes 1.18 Three-Master High-Availability Cluster Deployment Record
Environment: four virtual machines, with the roles master0~2 and node0.
Operating system: CentOS 7 (1804). Specs per VM: 4 CPU cores, 6 GB RAM, two 60 GB disks.

192.168.20.196   master0
192.168.20.197   master1
192.168.20.198   master2
192.168.20.199   node0
192.168.20.200   VIP

[This is a demonstration deployment, so static IPs were not configured; for a real test lab it is recommended to give all four VMs static IPs.]

1. Configure the yum, Docker and Kubernetes repositories on all four VMs. For personal use the NetEase mirrors are recommended; if your network connectivity is good, the Alibaba mirrors also work.
2. Prepare the host environment on all four VMs: firewall, SELinux, NTP, crontab, swap, bridge netfilter settings, hostname, /etc/hosts, and passwordless SSH between the hosts (a preparation sketch follows this list).
3. Install and configure Docker on all four VMs.
4. Install kubeadm, kubelet and kubectl on all four VMs.
5. Install keepalived + LVS on master0~2 and adjust /etc/keepalived/keepalived.conf as needed (a keepalived.conf sketch follows this list).
6. Start keepalived on master0~2 one after another, in non-preemptive mode.
7. Run kubeadm init on master0, using "--image-repository" to point at the image registry you normally use (a kubeadm init sketch follows this list).
8. Record the "kubeadm join 192.168.20.200:6443 --token ..." line from the init output; alternatively, when adding nodes later, look the token up with "kubeadm token list" (a recovery sketch follows this list).
9. The network plugin is Canal, which supports NetworkPolicy. Run "kubectl get pod --namespace=kube-system -o wide | grep canal" to check the Canal deployment, then confirm the overall state with "kubectl get nodes" and "kubectl get pods -n kube-system". [Note: the Canal project itself is no longer maintained; Canal is essentially the combination of Flannel and Calico.]
10. Copy the certificates from master0 to master1 and master2. First create the target directories on master1 and master2:
    cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
    Then run on master0 (a loop form of the same copy follows this list):
    scp /etc/kubernetes/pki/ca.crt master1:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.key master1:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.key master1:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub master1:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.crt master1:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.key master1:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt master1:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/pki/etcd/ca.key master1:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
11. Join master1 and master2 to the cluster as control-plane nodes. On master1 and master2 run:
    kubeadm join 192.168.20.200:6443 --token 7de7h55rnwluq.x6nypjrhl --discovery-token-ca-cert-hash sha256:fa75619ab50a9dbda9aa6c89828c2c0bb627312634650299fe1647ab510a7e6c --control-plane
12. On master1 and master2, set up kubectl access:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    Then run "kubectl get nodes" to check the node roles and status.
13. Add node0 to the cluster. On node0 run:
    kubeadm join 192.168.20.200:6443 --token 7de7h55rnwluq.x6nypjrhl --discovery-token-ca-cert-hash sha256:fa75619ab50a9dbda9aa6c89828c2c0bb627312634650299fe1647ab510a7e6c
14. On master0, run kubectl get nodes to check the status of all cluster nodes.
15. Install Helm and use it to deploy Traefik as the reverse proxy / load balancer (a Helm sketch follows this list).
16. Use Helm to deploy the GUI management tool kubernetes-dashboard.
17. Use Helm to deploy cluster monitoring with Prometheus Operator. [Note: for a large test environment or production, configuring container liveness and readiness probes is recommended; a probe example follows this list.]
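Host preparation sketch (step 2). The record does not show the exact commands, so the following is a minimal sketch under common assumptions; package names, file paths and the omitted NTP/hostname/SSH steps should be adapted to your environment.

# Run on every VM (master0~2 and node0); adapt before use.
systemctl stop firewalld && systemctl disable firewalld              # firewall off
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # SELinux off after reboot
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab                                  # keep swap off across reboots
cat > /etc/sysctl.d/k8s.conf <<'EOF'                                 # make bridged traffic visible to iptables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
modprobe br_netfilter
sysctl --system
# NTP/crontab, hostname, /etc/hosts entries and SSH key exchange are omitted here.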
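keepalived.conf sketch (steps 5 and 6). The record only says the file was adjusted as needed, so everything below is an assumption: the NIC name, virtual_router_id, priorities and auth_pass are placeholders, and the LVS virtual_server section that balances port 6443 across the three masters is omitted. The non-preemptive behaviour comes from state BACKUP plus nopreempt on all three masters.

# Example for master0; lower the priority on master1 and master2.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state BACKUP            # all three masters start as BACKUP ...
    nopreempt               # ... and never preempt a running MASTER
    interface ens33         # assumed NIC name; check with ip addr
    virtual_router_id 51
    priority 100            # e.g. 90 on master1, 80 on master2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        192.168.20.200      # the cluster VIP
    }
}
EOF
systemctl enable keepalived && systemctl start keepalived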
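kubeadm init sketch (step 7). The record only mentions the "--image-repository" flag; the registry address, version and pod CIDR below are assumptions (10.244.0.0/16 matches the default used by the Flannel/Canal manifests), and pointing --control-plane-endpoint at the VIP is what makes the join commands in steps 11 and 13 use 192.168.20.200:6443.

# On master0 only; a sketch, not the author's exact command.
kubeadm init \
  --kubernetes-version v1.18.0 \
  --control-plane-endpoint 192.168.20.200:6443 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr 10.244.0.0/16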
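Recovering the join information (step 8). If the token from the init output is lost or has expired, the standard kubeadm and openssl commands below reproduce it; nothing here is specific to this cluster.

kubeadm token list                          # show existing bootstrap tokens
kubeadm token create --print-join-command   # issue a new token and print a worker join command
# Recompute the --discovery-token-ca-cert-hash value:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'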
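Loop form of the certificate copy (step 10). This is equivalent to the scp list above; it assumes master1 and master2 resolve through /etc/hosts and that passwordless SSH was set up in step 2.

# Run on master0 after the target directories exist on master1 and master2.
for host in master1 master2; do
  scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} \
      ${host}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} ${host}:/etc/kubernetes/pki/etcd/
done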
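Helm add-on sketch (steps 15 to 17). Helm 3 syntax is assumed; the chart repository and chart names are assumptions that have changed several times since this record was written, so treat this as the shape of the commands rather than something to run verbatim.

helm repo add stable https://charts.helm.sh/stable                           # assumed chart repository
helm repo update
helm install traefik stable/traefik -n kube-system                           # reverse proxy / load balancer
helm install kubernetes-dashboard stable/kubernetes-dashboard -n kube-system # GUI management tool
helm install prometheus-operator stable/prometheus-operator \
  -n monitoring --create-namespace                                           # cluster monitoring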
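Liveness and readiness probe example (closing note). A hypothetical nginx Deployment named probe-demo, added only to illustrate what the closing note recommends; it is not part of the original deployment.

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: probe-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: probe-demo
  template:
    metadata:
      labels:
        app: probe-demo
    spec:
      containers:
      - name: web
        image: nginx:1.19
        ports:
        - containerPort: 80
        livenessProbe:            # kubelet restarts the container if this keeps failing
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:           # pod is removed from Service endpoints while this fails
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
EOF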
孟伯, 2020-05-22