Deploying a Kubernetes Cluster from Binaries

Reader contribution · 2022-09-09


Table of Contents

1. Deployment steps
2. Environment planning
   2.1. Set hostnames
   2.2. Add hostname resolution
   2.3. Disable SELinux and the firewall
   2.4. Install Docker on each node and configure a registry mirror
3. Self-signed TLS certificates
   3.1. Install the certificate tool cfssl
   3.2. Generate the CA certificate
   3.3. Generate the server certificate
   3.4. Generate the admin certificate
   3.5. Generate the kube-proxy certificate
   3.6. Delete unused files
4. Deploy the etcd cluster
   4.1. Download the etcd-v3.2.29 binary package (on the master node)
   4.2. Write the etcd configuration file
   4.3. Write etcd.service
   4.4. Start etcd
   4.5. Set up passwordless SSH login
   4.6. Copy the files to the two nodes
   4.7. Modify the etcd configuration file on each node
   4.8. Add environment variables (master)
   4.9. Verify the cluster
5. Deploy the Flannel network
   5.1. Download the flannel binary package and copy it to the nodes
   5.2. Write the allocated subnet into etcd for flanneld to use
   5.3. Write the flanneld configuration file (on the nodes)
   5.4. Write the flanneld.service configuration file
   5.5. Modify the docker.service configuration file
   5.6. Start the services
   5.7. Check the NICs (docker0 and flannel.1 are on the same network)
   5.8. Copy the configuration files to the other node and repeat
6. Deploy the master components
   6.1. Download the master binary package
   6.2. Create the TLS Bootstrapping token
   6.3. The apiserver.sh script
   6.4. The controller-manager.sh script
   6.5. The scheduler.sh script
   6.6. Run the scripts
   6.7. Check the master cluster status
7. Create the kubeconfig files for the node components
   7.1. Specify the access endpoint
   7.2. Create the kubelet kubeconfig
      7.2.1. Set cluster parameters
      7.2.2. Set client authentication parameters
      7.2.3. Set context parameters
      7.2.4. Set the default context
   7.3. Create the kube-proxy kubeconfig
      7.3.1. Create the kube-proxy kubeconfig file
   7.4. Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to all nodes
8. Deploy the node components
   8.1. Add role permissions
   8.2. Send kubelet and kube-proxy to the node machines
   8.3. On node1, write the kubelet.sh script
   8.4. Write the proxy.sh script
   8.5. Run the scripts
   8.6. Check the CSR list
   8.7. After approval the status changes to Approved
   8.8. Send the scripts to node2
   8.9. Run the scripts on node2
   8.10. Approve on the master
   8.11. Check the node cluster information
9. Run a test instance to check cluster health
   9.1. Expose a port for external access
   9.2. Access from an external browser (any node works)

1. Deployment steps

Reference video: (link not preserved)

2. Environment planning

master: 172.16.38.208
node1:  172.16.38.174
node2:  172.16.38.234

2.1. Set hostnames

hostnamectl set-hostname master   # on the master
hostnamectl set-hostname node1    # on node1
hostnamectl set-hostname node2    # on node2

2.2. Add hostname resolution

echo "172.16.38.174 node1" >> /etc/hosts
echo "172.16.38.234 node2" >> /etc/hosts

2.3. Disable SELinux and the firewall

systemctl disable firewalld
systemctl stop firewalld
setenforce 0                                                          # disable SELinux for this boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # and persistently

2.4. Install Docker on each node and configure a registry mirror
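The original post shows no commands for this step. After installing Docker on each node, a registry mirror is typically configured in /etc/docker/daemon.json, followed by a restart of the Docker service; the mirror URL below is only an illustrative example, not one from the original author:

```json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
```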

3. Self-signed TLS certificates

3.1. Install the certificate tool cfssl

mkdir ssl
cd ssl/
# download the cfssl, cfssljson, and cfssl-certinfo binaries
# (the download links were lost in the original post)
chmod +x ./*
mv cfssl_linux-amd64 /usr/local/bin/cfssl                      # generates certificates
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson              # JSON output support
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo    # inspects certificate info
#cfssl print-defaults config > config.json   # print a config template
#cfssl print-defaults csr > csr.json         # print a CSR template

3.2. Generate the CA certificate

vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

vim ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

3.3. Generate the server certificate

vim server-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.16.38.208",
    "172.16.38.174",
    "172.16.38.234",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes server-csr.json | cfssljson -bare server

3.4. Generate the admin certificate

vim admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

3.5. Generate the kube-proxy certificate

vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

3.6. Delete unused files

ls | grep -v pem | xargs rm -rf   # keep only the *.pem files
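The pipeline above removes everything whose name does not contain "pem". A scratch-directory demonstration of what it keeps and removes (the file names here are illustrative):

```shell
# Recreate a miniature ssl/ directory in /tmp
mkdir -p /tmp/pem-demo && cd /tmp/pem-demo
touch ca.pem ca-key.pem ca.csr ca-config.json

# grep -v pem passes through only the names NOT containing "pem"
ls | grep -v pem          # lists ca-config.json and ca.csr

# ...and those are exactly the files xargs rm -rf deletes
ls | grep -v pem | xargs rm -rf
ls                        # only ca.pem and ca-key.pem remain
```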

4. Deploy the etcd cluster

mkdir -p /opt/kubernetes/{bin,cfg,ssl}

4.1. Download the etcd-v3.2.29 binary package (on the master node)

# (the download link was lost in the original post)
tar xvf etcd-v3.2.29-linux-amd64.tar.gz
mv etcd-v3.2.29-linux-amd64/{etcd,etcdctl} /opt/kubernetes/bin/
cp ssl/ca*.pem ssl/server*.pem /opt/kubernetes/ssl/

4.2. Write the etcd configuration file

cat > /opt/kubernetes/cfg/etcd << EOF
# (the heredoc body did not survive in the original post)
EOF
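The body of this file was lost. For etcd v3.2 deployed this way, the flag file for the master member (etcd01) conventionally looks like the following; the URLs are reconstructions built from the node IPs in section 2 and the standard 2379/2380 ports, not the author's original text:

```ini
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.38.208:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.38.208:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.38.208:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.38.208:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.38.208:2380,etcd02=https://172.16.38.174:2380,etcd03=https://172.16.38.234:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```

The etcd02/etcd03 member names match the per-node edits shown in section 4.7.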

4.3. Write etcd.service

cat > /usr/lib/systemd/system/etcd.service << EOF
# (the heredoc body did not survive in the original post)
EOF
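The unit body was also lost. A sketch that maps the variables from the /opt/kubernetes/cfg/etcd file written in step 4.2 onto etcd flags, using the certificate paths that appear elsewhere in this post (treat the exact flag set as an assumption):

```ini
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/opt/kubernetes/ssl/server.pem \
  --key-file=/opt/kubernetes/ssl/server-key.pem \
  --peer-cert-file=/opt/kubernetes/ssl/server.pem \
  --peer-key-file=/opt/kubernetes/ssl/server-key.pem \
  --trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```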

4.4. Start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd   # hangs until the other members join; press Ctrl+C
ps -ef | grep etcd

4.5. Set up passwordless SSH login

ssh-keygen -t rsa                # press Enter at every prompt
ssh-copy-id root@172.16.38.174   # copy the public key
ssh-copy-id root@172.16.38.234

4.6. Copy the files to the two nodes

rsync -avzP /opt/kubernetes node1:/opt/
rsync -avzP /opt/kubernetes node2:/opt/
scp /usr/lib/systemd/system/etcd.service node1:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service node2:/usr/lib/systemd/system/

4.7. Modify the etcd configuration file on each node

On node1:

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="..."
# (the remaining lines were lost; every URL in the file must use node1's IP, 172.16.38.174)

systemctl start etcd

On node2:

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="..."
# (likewise, with node2's IP, 172.16.38.234)

systemctl start etcd

4.8. Add environment variables (master)

echo "PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
source /etc/profile

4.9. Verify the cluster

cd /opt/kubernetes/ssl
etcdctl --ca-file=ca.pem \
  --cert-file=server.pem \
  --key-file=server-key.pem \
  --endpoints="..." \
  cluster-health
# (the endpoint list was lost; it is normally the three https://<node-ip>:2379 addresses)

5. Deploy the Flannel network

5.1. Download the flannel binary package and copy it to the nodes

# (the download link was lost in the original post)
tar xvf flannel-v0.9.1-linux-amd64.tar.gz
scp flanneld mk-docker-opts.sh node1:/opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh node2:/opt/kubernetes/bin/

5.2. Write the allocated subnet into etcd for flanneld to use

cd /opt/kubernetes/ssl/
etcdctl --ca-file=ca.pem \
  --cert-file=server.pem \
  --key-file=server-key.pem \
  --endpoints="..." \
  set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
# (the endpoint list was lost, as in section 4.9)

5.3. Write the flanneld configuration file (on the nodes)

cat > /opt/kubernetes/cfg/flanneld << EOF
FLANNEL_OPTIONS="--etcd-endpoints=... -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

(The etcd endpoint list was lost; it is normally the three https://<node-ip>:2379 addresses.)

5.4. Write the flanneld.service configuration file

cat > /usr/lib/systemd/system/flanneld.service << EOF
# (the heredoc body did not survive in the original post)
EOF
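The unit body was lost here as well. A sketch consistent with the pieces visible elsewhere in the post (the /opt/kubernetes/cfg/flanneld environment file, the mk-docker-opts.sh helper, and the /run/flannel/subnet.env file consumed by docker.service):

```ini
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```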

5.5. Modify the docker.service configuration file

Change two places:

EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
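For reference, flanneld records its lease in /run/flannel/subnet.env, which mk-docker-opts.sh turns into the $DOCKER_NETWORK_OPTIONS variable used above. It typically contains values like the following; the per-node /24 subnet is an illustrative example, since each node is assigned its own slice of 172.17.0.0/16:

```ini
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.63.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```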

5.6. Start the services

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

5.7. Check the NICs (docker0 and flannel.1 should be on the same subnet)

5.8. Copy the configuration files to the other node and repeat

scp cfg/flanneld 172.16.38.234:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/{docker.service,flanneld.service} 172.16.38.234:/usr/lib/systemd/system/

# then, on node2:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

6. Deploy the master components

6.1. Download the master binary package

# (the download link was lost in the original post)
tar xvf kubernetes-server-linux-amd64.tar.gz
mv kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*

6.2. Create the TLS Bootstrapping token

cd /opt/kubernetes/cfg/
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv << EOF
EOF

(The body of token.csv was lost in the original post.)
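In this style of deployment, token.csv conventionally holds a single line in the form token,user,uid,"group", with the user name matching the kubelet-bootstrap credential created in section 7.2.2. The sketch below regenerates the token and writes that assumed layout:

```shell
# Generate a random 32-hex-character bootstrap token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

# Assumed single-line layout: token, user name, uid, group
cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```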

6.3. The apiserver.sh script

vim /opt/kubernetes/bin/apiserver.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"192.168.1.195"}
ETCD_SERVERS=${2:-"..."}   # the default endpoint list was lost; normally the https://<node-ip>:2379 list

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

# the [Service] section below was partially lost in the original; it is
# reconstructed to match the kubelet.service and kube-proxy.service units later in the post
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

6.4. The controller-manager.sh script

vim /opt/kubernetes/bin/controller-manager.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

# [Service] section reconstructed on the same pattern as the other units in this post
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

6.5. The scheduler.sh script

vim /opt/kubernetes/bin/scheduler.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

# [Service] section reconstructed on the same pattern as the other units in this post
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

6.6. Run the scripts

cd /opt/kubernetes/bin/
chmod +x *.sh
./apiserver.sh 172.16.38.208 ...   # the etcd endpoint argument was lost in the original
./controller-manager.sh 127.0.0.1  # these two invocations do not survive in the post,
./scheduler.sh 127.0.0.1           # but the scripts above are written to be run this way

6.7. Check the master cluster status

[root@master bin]# kubectl get cs

7. Create the kubeconfig files for the node components

7.1. Specify the access endpoint

export KUBE_APISERVER="..."
# (the value was lost in the original; it is normally https://<master-ip>:6443)

7.2. Create the kubelet kubeconfig

7.2.1. Set cluster parameters

cd /opt/kubernetes/ssl
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

7.2.2. Set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

7.2.3. Set context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

7.2.4. Set the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

7.3. Create the kube-proxy kubeconfig

7.3.1. Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

cp /root/ssl/kube-proxy* /opt/kubernetes/ssl/

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

7.4. Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to all nodes

mv bootstrap.kubeconfig kube-proxy.kubeconfig ../cfg/
scp ../cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} node1:/opt/kubernetes/cfg/
scp ../cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} node2:/opt/kubernetes/cfg/

8. Deploy the node components

8.1. Add role permissions

[root@master ~]# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

8.2. Send kubelet and kube-proxy to the node machines

[root@master ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node1:/opt/kubernetes/bin/
[root@master ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node2:/opt/kubernetes/bin/

8.3. On node1, write the kubelet.sh script

vim /opt/kubernetes/bin/kubelet.sh

#!/bin/bash
NODE_ADDRESS=${1:-"192.168.1.196"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

8.4. Write the proxy.sh script

vim /opt/kubernetes/bin/proxy.sh

#!/bin/bash
NODE_ADDRESS=${1:-"192.168.1.200"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

8.5. Run the scripts

chmod +x *.sh
./kubelet.sh 172.16.38.174 10.10.10.2
./proxy.sh 172.16.38.174

8.6. Check the CSR list

[root@master ~]# kubectl get csr

8.7. After approval the status changes to Approved

[root@master ~]# kubectl certificate approve node-csr-x4F5fniCL-kj0F_Dl-g2RKUWESv3kKC6nS7J-ZrE81U

8.8. Send the scripts to node2

[root@node1 bin]# scp kubelet.sh proxy.sh 172.16.38.234:/opt/kubernetes/bin/

8.9. Run the scripts on node2

[root@node2 bin]# ./kubelet.sh 172.16.38.234 10.10.10.2
[root@node2 bin]# ./proxy.sh 172.16.38.234

8.10. Approve on the master

[root@master ~]# kubectl get csr
[root@master ~]# kubectl certificate approve node-csr-XjrmhFhj9gGdryQGduOvlA3eJ0THSXWiyRbcTpjyUeo

8.11. Check the node cluster information

[root@master ~]# kubectl get nodes

9. Run a test instance to check cluster health

kubectl run nginx --image=nginx --replicas=3

[root@master ~]# kubectl get pod -o wide

9.1. Expose a port for external access

kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
kubectl get svc

9.2. Access from an external browser (any node works)
