Lab: Deploying k8s with kubeadm



Environment:

- master01 (192.168.206.3): docker, kubeadm, kubelet, kubectl, flannel
- node01 (192.168.206.5): docker, kubeadm, kubelet, kubectl, flannel
- node02 (192.168.206.6): docker, kubeadm, kubelet, kubectl, flannel
- Harbor node (192.168.206.14, hub.kgc.com): docker, docker-compose, harbor-offline-v1.2.2

Overall steps:

1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes master
3. Deploy the container network plugin
4. Deploy the Kubernetes nodes and join them to the cluster
5. Deploy the Dashboard web UI to view Kubernetes resources visually
6. Deploy a Harbor private registry to store image resources

-------------------- Environment preparation --------------------

```bash
# On all nodes: disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a                              # the swap partition must be turned off
sed -ri 's/.*swap.*/#&/' /etc/fstab     # disable swap permanently; & in sed stands for the previous match

# Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs | grep -o "^[^.]*"); do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i; done
```
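A quick sanity check here can save debugging later. The commands below are not part of the original walkthrough; they are a minimal verification sketch assuming the standard CentOS tools (free, lsmod, getenforce) are present:

```bash
# Confirm swap is off (the Swap line should show 0 everywhere)
free -m

# Confirm the ip_vs kernel modules were actually loaded
lsmod | grep -E 'ip_vs|nf_conntrack'

# Confirm SELinux is permissive for the current boot
getenforce
```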

```bash
# Set the hostnames
hostnamectl set-hostname master01    # on master01
hostnamectl set-hostname node01      # on node01
hostnamectl set-hostname node02      # on node02
```

```bash
# On all nodes, edit the hosts file
vim /etc/hosts
192.168.206.3 master01
192.168.206.5 node01
192.168.206.6 node02
```

```bash
# Adjust kernel parameters
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF
```

```bash
# Apply the parameters
sysctl --system
```
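If the two net.bridge.* keys report "No such file or directory" when the parameters are applied, the br_netfilter module is usually not loaded yet. This is not covered in the original steps; the following is a hedged workaround sketch assuming a stock CentOS 7 kernel:

```bash
# Load the bridge netfilter module so the net.bridge.* sysctls exist
modprobe br_netfilter

# Make the module load on boot as well
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Re-apply and verify the values took effect
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```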

Demonstration on one node (screenshots).

-------------------- Install Docker on all nodes --------------------

```bash
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo ...    # docker-ce yum repository URL
yum install -y docker-ce docker-ce-cli containerd.io
```

```bash
mkdir /etc/docker
# Use the systemd cgroup driver so Docker matches the kubelet
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```

```bash
systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service
```

```bash
docker info | grep "Cgroup Driver"
Cgroup Driver: systemd
```

Demonstration on one node (screenshots).

-------------------- Install kubeadm, kubelet, and kubectl on all nodes --------------------

```bash
# Define the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=...
EOF

yum install -y kubelet-1.20.11 kubeadm-1.20.11 kubectl-1.20.11
```

```bash
# Enable kubelet to start on boot
systemctl enable kubelet.service
# After a kubeadm install, the K8S components all run as Pods, i.e. as containers underneath,
# so the kubelet must be set to start on boot
```
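Before initializing the cluster it is worth confirming that all three components landed at the expected version on every node. This check is not in the original text; it is a small verification sketch using standard kubeadm/kubectl flags:

```bash
# All three should report v1.20.11
kubeadm version -o short
kubelet --version
kubectl version --client --short
```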

-------------------- Deploy the K8S cluster --------------------

```bash
# The preceding steps must be completed on all nodes; run the initialization itself on master01
kubeadm init \
--apiserver-advertise-address=192.168.206.3 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
```
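On slow links the init step can time out while pulling the control-plane images. Pre-pulling them is optional and not part of the original procedure; a sketch using the same Aliyun mirror and version as the init command above:

```bash
# Download the control-plane images ahead of time so kubeadm init does not wait on the network
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.0
```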

The end of the kubeadm init output prints the join command for the worker nodes (output abridged):

```bash
......
kubeadm join 192.168.206.3:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3244085b37cbe02bb13f2184d34c5bdd5c6de81c69b8a776312a605cc791d6c0
......
```
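The bootstrap token in that join command expires after 24 hours by default. If a node is added later, a fresh join command can be generated on the master; this tip is not in the original article but uses a standard kubeadm subcommand:

```bash
# Print a new, complete join command (new token plus CA cert hash)
kubeadm token create --print-join-command
```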

```bash
# On master01, set up the kubeconfig for kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
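At this point kubectl on master01 should be able to reach the API server. A quick check, not shown in the original text (the master typically reports NotReady until the network plugin is deployed in a later step):

```bash
# The control-plane node is expected to be NotReady until flannel is deployed
kubectl get nodes

# The control-plane Pods; coredns stays Pending until the network plugin is up
kubectl get pods -n kube-system
```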

```bash
# After initialization, edit the kube-proxy configmap to enable ipvs
kubectl edit cm kube-proxy -n kube-system
# change to:  mode: ipvs
```
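Editing the configmap alone does not change running kube-proxy Pods; they have to be recreated to pick up the new mode. The original article does not show this, so the following is a hedged sketch that assumes the standard kubeadm label k8s-app=kube-proxy and, for the last check, that the ipvsadm package is installed:

```bash
# Recreate the kube-proxy Pods so they reload the configmap
kubectl -n kube-system delete pod -l k8s-app=kube-proxy

# Confirm the proxier switched to ipvs
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs

# Optional: inspect the ipvs rules directly (requires the ipvsadm package)
ipvsadm -Ln
```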

```bash
# If kubectl get cs reports the cluster as unhealthy, edit the following two files
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
```

Make the following changes in both files:

- Change --bind-address=127.0.0.1 to --bind-address=192.168.206.3 (the IP of the k8s control-plane node master01).
- Search for --port=0 and comment that line out.
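For orientation, the edited section of kube-scheduler.yaml ends up looking roughly like the abridged excerpt below. This excerpt is illustrative only; flags other than the two edits may differ slightly between 1.20.x patch releases:

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml (abridged)
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=192.168.206.3      # was 127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
#   - --port=0                          # commented out
```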

```bash
systemctl restart kubelet
```
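After the kubelet restart, the static Pods for the scheduler and controller-manager are recreated, which can take a minute. A quick re-check (kubectl get cs is deprecated in 1.20 but still works for this purpose):

```bash
# All three components should now report Healthy
kubectl get cs
```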

-------------------- Deploy the flannel network plugin on all nodes --------------------

Method 1:

```bash
# Upload the flannel image archive flannel.tar to /opt on all nodes, and kube-flannel.yml to the master node
cd /opt
scp -r flannel.tar root@node01:/opt
scp -r flannel.tar root@node02:/opt
docker load -i flannel.tar

# On the master node, create the flannel resources
kubectl apply -f kube-flannel.yml

# On the worker nodes, run the kubeadm join command to join the cluster
kubeadm join 192.168.206.3:6443 --token 20oxr9.pt0cifb5zazx3yqh \
--discovery-token-ca-cert-hash sha256:b33b6e5c046f76b47a65d0b41084d2ab6550dd75cc65480cbc15189370ec61a0

# On the master node, check the node status
kubectl get nodes

# Test creating a pod resource
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-zr2xs   1/1     Running   0          14m   10.244.1.2   node01   <none>           <none>

# Expose a port to provide the service
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        25h
nginx        NodePort    10.96.15.132   <none>        80:32698/TCP   4s

# Test access
curl http://node01:32698

# Scale out to 3 replicas
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-9kh4s   1/1     Running   0          66s   10.244.1.3   node01   <none>           <none>
nginx-554b9c67f9-rv77q   1/1     Running   0          66s   10.244.2.2   node02   <none>           <none>
nginx-554b9c67f9-zr2xs   1/1     Running   0          17m   10.244.1.2   node01   <none>           <none>
```
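Because nginx is exposed as a NodePort Service, the same port (32698 here) answers on every node's IP, not just node01. Once the test is done the demo resources can be removed; this cleanup is not part of the original walkthrough:

```bash
# The NodePort is reachable through any node
curl http://node02:32698

# Optional cleanup of the test resources
kubectl delete service nginx
kubectl delete deployment nginx
```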

------------------------------ Deploy the Dashboard ------------------------------

```bash
# On the master01 node
# Upload the recommended.yaml and dashboard.tar files to the /opt/k8s directory
docker load -i dashboard.tar
docker load -i metrics-scraper.tar
```

```bash
# Copy the image archives to the worker nodes and load them there as well
scp dashboard.tar metrics-scraper.tar root@node01:/opt
scp dashboard.tar metrics-scraper.tar root@node02:/opt
# On node01 and node02:
docker load -i dashboard.tar
docker load -i metrics-scraper.tar
```

```bash
cd /opt/k8s
vim recommended.yaml
```

By default the Dashboard is only reachable from inside the cluster. Change its Service to the NodePort type to expose it externally:

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # add
  type: NodePort          # add
  selector:
    k8s-app: kubernetes-dashboard
```

```bash
kubectl apply -f recommended.yaml
```
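A quick way to confirm the Dashboard came up and is exposed on the expected NodePort (not shown in the original text):

```bash
# Both Pods (dashboard and metrics-scraper) should be Running,
# and the kubernetes-dashboard Service should show 443:30001/TCP
kubectl get pods,svc -n kubernetes-dashboard -o wide
```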

```bash
# Create a service account and bind it to the default cluster-admin cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
```

Use the token printed in the output above to log in to the Dashboard at https://NodeIP:30001.
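If only the raw token string is needed for the login form, it can be pulled out of the service account's secret directly. This is an alternative to the describe command above, valid for 1.20 where service-account token secrets are still auto-created; it is not part of the original article:

```bash
# Print just the bearer token for the dashboard-admin service account
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get serviceaccount dashboard-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d
```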


