Kubernetes Cluster Hands-On Series: Node Maintenance (Removing and Adding Nodes)
Preparation
A Raspberry Pi k8s cluster:
root@pi-master01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pi-master01 Ready master 348d v1.15.10 192.168.5.11
This walkthrough uses pi-master02 and pi-node01 as the target nodes to demonstrate the process and effect of removing and re-adding cluster nodes.
Removing nodes
This section removes pi-master02 and pi-node01, where pi-master02 is a master (control-plane) node and pi-node01 is a worker node.
Removing a master
On pi-master01, drain the pods deployed on pi-master02:
root@pi-master01:~# kubectl drain pi-master02 --delete-local-data --force --ignore-daemonsets
node/pi-master02 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-arm64-79kjw, kube-system/kube-proxy-mw422, monitoring/arm-exporter-dmbch, monitoring/node-exporter-24p2m
node/pi-master02 drained
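A note on the drain flags: --ignore-daemonsets skips DaemonSet-managed pods (their controller would recreate them on the node immediately anyway), --delete-local-data allows evicting pods that use emptyDir volumes (their local data is lost), and --force evicts pods that are not managed by a controller. On newer kubectl releases (v1.20+), --delete-local-data was renamed, so the equivalent command would be:

root@pi-master01:~# kubectl drain pi-master02 --delete-emptydir-data --force --ignore-daemonsets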
On pi-master01, delete pi-master02:
root@pi-master01:~# kubectl delete node pi-master02
node "pi-master02" deleted
On pi-master01, verify that pi-master02 has been deleted:
root@pi-master01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pi-master01 Ready master 381d v1.15.10 192.168.5.11
On pi-master02, reset the node:
root@pi-master02:~# kubeadm reset
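kubeadm reset itself warns that it does not clean up iptables/IPVS rules or the CNI configuration; on a stacked-etcd control plane it also attempts to remove the local etcd member. A minimal manual cleanup sketch, assuming you want the node fully scrubbed (adjust for your CNI):

root@pi-master02:~# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # flush rules kubeadm reset leaves behind
root@pi-master02:~# rm -rf /etc/cni/net.d   # remove CNI config, as kubeadm reset suggests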
Removing a worker node
On pi-master01, drain the pods deployed on pi-node01:
root@pi-master01:~# kubectl drain pi-node01 --delete-local-data --force --ignore-daemonsets
node/pi-node01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-arm64-zpdrx, kube-system/kube-proxy-jj6rv, monitoring/arm-exporter-dmj8m, monitoring/node-exporter-wmpfw
evicting pod "prometheus-operator-7d578bdb5b-sz7vm"
evicting pod "mysqld-monitor216-859bcb94f9-t5k9n"
evicting pod "kube-state-metrics-6d766b45f4-zmwth"
pod/kube-state-metrics-6d766b45f4-zmwth evicted
pod/prometheus-operator-7d578bdb5b-sz7vm evicted
pod/mysqld-monitor216-859bcb94f9-t5k9n evicted
node/pi-node01 evicted
On pi-master01, delete pi-node01:
root@pi-master01:~# kubectl delete node pi-node01
node "pi-node01" deleted
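Before moving on, it is worth checking that the workloads evicted during the drain (prometheus-operator, kube-state-metrics and mysqld-monitor216 above) were rescheduled onto the remaining nodes, for example:

root@pi-master01:~# kubectl get pods -n monitoring -o wide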
On pi-master01, verify that pi-node01 has been deleted:
root@pi-master01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pi-master01 Ready master 381d v1.15.10 192.168.5.11
On pi-node01, reset the node:
root@pi-node01:~# kubeadm reset
Adding nodes
This section re-adds the deleted pi-master02 and pi-node01 to the cluster; pi-master02 rejoins as a master node and pi-node01 as a worker node.
On pi-master01, generate the join command:
root@pi-master01:~# kubeadm token create --print-join-command
kubeadm join 192.168.5.3:9443 --token ek7ous.udlgt7svc39q3ds1 --discovery-token-ca-cert-hash sha256:c9d6aa507dc7cb4ffcae10e89b64ac752d37f5f5ee869230d3a023ebb1bf8d89
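Bootstrap tokens created this way expire after 24 hours by default; if the node will join later, create a token with a longer TTL, e.g.:

root@pi-master01:~# kubeadm token create --ttl 48h --print-join-command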
Adding a master
On pi-master01, upload the control-plane certificates and print the certificate key needed to join as a master:
root@pi-master01:~# kubeadm init phase upload-certs --upload-certs
I0425 14:27:26.416291 29570 version.go:248] remote version is much newer: v1.23.6; falling back to: stable-1.15
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: b66a19121100813a573ad75b9c02bf159cbcfa80a0bc5037bea7d95f3028cf5d
On pi-master02, run the join command with the additional control-plane flags:
root@pi-master02:~# kubeadm join 192.168.5.3:9443 --token ek7ous.udlgt7svc39q3ds1 --discovery-token-ca-cert-hash sha256:c9d6aa507dc7cb4ffcae10e89b64ac752d37f5f5ee869230d3a023ebb1bf8d89 --control-plane --certificate-key b66a19121100813a573ad75b9c02bf159cbcfa80a0bc5037bea7d95f3028cf5d
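Here --control-plane makes the node join as an additional control-plane instance, and --certificate-key decrypts the certificates stored in the kubeadm-certs Secret. kubeadm deletes that Secret after two hours, so if the join fails with an expired certificate key, regenerate it on pi-master01 and retry:

root@pi-master01:~# kubeadm init phase upload-certs --upload-certs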
On pi-master01, verify that pi-master02 has been added:
root@pi-master01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pi-master01 Ready master 381d v1.15.10 192.168.5.11
Adding a worker node
On pi-node01, run the join command:
root@pi-node01:~# kubeadm join 192.168.5.3:9443 --token ek7ous.udlgt7svc39q3ds1 --discovery-token-ca-cert-hash sha256:c9d6aa507dc7cb4ffcae10e89b64ac752d37f5f5ee869230d3a023ebb1bf8d89
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
On pi-master01, verify that pi-node01 has been added:
root@pi-master01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pi-master01 Ready master 382d v1.15.10 192.168.5.11
Addendum
Setting the ROLES labels
root@pi-master01:~# kubectl label node pi-master02 node-role.kubernetes.io/master=
root@pi-master01:~# kubectl label node pi-node01 node-role.kubernetes.io/node=
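The ROLES column of kubectl get nodes is derived from the node-role.kubernetes.io/<role> labels, so a freshly joined node shows <none> until it is labeled. A role label can be removed again with a trailing dash:

root@pi-master01:~# kubectl label node pi-node01 node-role.kubernetes.io/node-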