Installing Kubernetes 1.20 from Binaries

Community contribution · 2022-09-08

Chapter 1: Installation Notes
Chapter 2: Cluster Installation
  2.1 Package Download
  2.2 Basic Environment Configuration
  2.3 Installing Ansible
  2.4 Kernel Upgrade
Chapter 3: Basic Component Installation
  3.1 Installing Docker
  3.2 Installing K8s and etcd
Chapter 4: Generating Certificates
  4.1.1 etcd certificates
  4.1.2 k8s component certificates
  4.1.3 Generating the apiserver certificate
  4.1.4 Generating the requestheader-client / requestheader-allowed (aggregator) certificates
  4.1.5 Generating the controller-manager certificate
  4.1.6 Generating the scheduler certificate
  4.1.7 kubernetes-admin
  4.1.8 Creating the ServiceAccount key and secret
Chapter 5: Kubernetes System Component Configuration
  5.1 etcd configuration
    5.1.1 Master01
    5.1.2 Master02
    5.1.3 Master03
    5.1.4 Creating the service
Chapter 6: Kubernetes Component Configuration
  6.1 Apiserver
  6.2 ControllerManager
  6.3 Scheduler
Chapter 7: TLS Bootstrapping Configuration
Chapter 8: Node Configuration
  8.1 Copying certificates
  8.2 Kubelet configuration
  8.3 kube-proxy configuration
Chapter 9: Installing Calico
Chapter 10: Installing CoreDNS
  10.1 Installing the matching version (recommended)
  10.2 Installing the latest CoreDNS
Chapter 11: Installing Metrics Server
Chapter 12: Cluster Verification
Chapter 13: Installing the Dashboard
  13.1 Dashboard deployment
    13.1.1 Installing a specific dashboard version
    13.1.2 Installing the latest version
    13.1.3 Logging in to the dashboard
Chapter 14: Key Production Configuration

Installation Notes

This article demonstrates a binary installation of a highly available Kubernetes cluster (1.17+; v1.20.0 is used below) on CentOS 7. The binary installation procedure differs little between versions; just keep each component's version matched. For production, use a release whose patch version is greater than 5, for example 1.19.5 or later.

Planning

Role          IP             Components
k8s-master01  192.168.1.26   etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy
k8s-node01    192.168.1.74   kubelet, kube-proxy
k8s-node02    192.168.1.197  kubelet, kube-proxy

(The planning table above comes from the author's lab environment; the remainder of this article uses the three-master 192.168.0.x topology listed in the hosts file in section 2.2.)

Pod CIDR: set with the --cluster-cidr flag of kube-controller-manager and the clusterCIDR field of the kube-proxy configuration (for example 10.244.0.0/16; this article uses 172.16.0.0/12).
Service CIDR: set with --service-cluster-ip-range of kube-apiserver (this article uses 10.96.0.0/12).
Node network: the host network; kube-apiserver's --advertise-address must be the node's host IP.

Cluster Installation

Package Download

Create the working directory on all nodes:

mkdir k8s-ha-install

[root@k8s-master01 k8s-ha-install]# ls -l | awk '{print $9}'
bootstrap
calico
cfssl-certinfo_linux-amd64
cfssljson_linux-amd64
cfssl_linux-amd64
CoreDNS
csi-hostpath
dashboard
etcd-v3.4.13-linux-amd64.tar.gz
kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
kube-proxy
kubernetes-server-linux-amd64.tar.gz
metrics-server-0.4.x
metrics-server-0.4.x-kubeadm
pki
snapshotter

Basic Environment Configuration

Set the hostname on each node, for example:

hostnamectl set-hostname k8s-node02

Configure /etc/hosts on all nodes:

192.168.0.107 k8s-master01 # 2C2G 40G
192.168.0.108 k8s-master02 # 2C2G 40G
192.168.0.109 k8s-master03 # 2C2G 40G
192.168.0.236 k8s-master-lb # VIP; a virtual IP consumes no machine resources. If this is not an HA cluster, use Master01's IP here.
192.168.0.110 k8s-node01 # 2C2G 40G
192.168.0.111 k8s-node02 # 2C2G 40G

K8s Service CIDR: 10.96.0.0/12
K8s Pod CIDR: 172.16.0.0/12 (the ControllerManager section below uses this value; a Pod CIDR of 192.168.0.0/12 would overlap the 192.168.0.x host network)

Note:

The host network, the K8s Service CIDR, and the Pod CIDR must not overlap. See the course handout "Must-read before installation: cluster network planning" for details.
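Whether two CIDRs overlap can be checked mechanically before installing anything. The sketch below is plain bash with illustrative helper names; it compares the example networks from this section.

```shell
#!/usr/bin/env bash
# Sketch: check whether two IPv4 CIDRs overlap (helper names are illustrative).

ip2int() {                       # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidr_bounds() {                  # "a.b.c.d/len" -> "first last" as integers
  local ip=${1%/*} len=${1#*/}
  local base mask first
  base=$(ip2int "$ip")
  mask=$(( 0xFFFFFFFF & (0xFFFFFFFF << (32 - len)) ))
  first=$(( base & mask ))
  echo "$first $(( first + (1 << (32 - len)) - 1 ))"
}

cidrs_overlap() {                # prints "overlap" or "ok"
  local s1 e1 s2 e2
  read -r s1 e1 <<<"$(cidr_bounds "$1")"
  read -r s2 e2 <<<"$(cidr_bounds "$2")"
  (( s1 <= e2 && s2 <= e1 )) && echo overlap || echo ok
}

cidrs_overlap 10.96.0.0/12 172.16.0.0/12     # Service vs Pod CIDR
cidrs_overlap 192.168.0.0/24 192.168.0.0/12  # host network vs a Pod CIDR of 192.168.0.0/12
```

The first call prints "ok" (the ranges are disjoint); the second prints "overlap", which is exactly why this article uses 172.16.0.0/12 rather than 192.168.0.0/12 for pods.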

Set up passwordless SSH between nodes:

ssh-keygen
cd .ssh
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i ~/.ssh/id_rsa.pub $i;done

Configure the CentOS 7 yum repositories (the repository URLs were lost from the original article; the Aliyun mirrors below are the usual choice):

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Sync the yum configuration to the other nodes:

cd /etc/yum.repos.d
scp -r * root@k8s-worker2:/etc/yum.repos.d/

Installing Ansible

yum -y install ansible

Edit the Ansible configuration and add a hosts inventory (the later ansible commands reuse an inventory group named tdsql):

[all:vars]
ansible_ssh_user=root
ansible_ssh_pass="Tcdn@2007"
ansible_ssh_port=22

[tdsql]
172.18.0.2
172.18.0.9
172.18.0.11
172.18.0.17

(The 172.18.0.x addresses come from the author's environment; substitute your own node IPs.)

Install the required tools:

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Disable firewalld, dnsmasq, and SELinux on all nodes (CentOS 7 also needs NetworkManager disabled; CentOS 8 does not):

systemctl disable --now NetworkManager && systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable swap on all nodes and comment out the swap entry in fstab:

swapoff -a && sysctl -w vm.swappiness=0 && sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
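The sed expression above comments out every uncommented line containing "swap". A quick dry run on a scratch copy of fstab shows the effect before touching the real file:

```shell
# Demonstrate the swap-commenting sed on a scratch copy of fstab.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
UUID=1234-abcd          /boot xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

sed -ri '/^[^#]*swap/s@^@#@' "$fstab"   # same expression as used on /etc/fstab
grep swap "$fstab"                       # the swap line is now commented out
```

Only the swap line gains a leading "#"; already-commented lines are untouched because the pattern requires the line not to start with "#".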

Synchronize time on all nodes.

Install ntpdate:

yum install ntpdate -y

Time synchronization configuration:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

# Add to crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com

Configure resource limits on all nodes:

ulimit -SHn 65535

vim /etc/security/limits.conf
# Append the following (a soft limit may not exceed its hard limit)
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/security/limits.conf $i:/etc/security/limits.conf; done

Download the installation files on Master01 (the clone URL was lost from the original article; the branch name below matches the dotbalo/k8s-ha-install repository):

[root@k8s-master01 ~]# cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
Cloning into 'k8s-ha-install'...
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 461 (delta 2), reused 5 (delta 1), pack-reused 449
Receiving objects: 100% (461/461), 19.52 MiB | 4.04 MiB/s, done.
Resolving deltas: 100% (163/163), done.

Switch to the matching branch:

cd k8s-ha-install && git checkout manual-installation-v1.20.x

Upgrade the system on all nodes and reboot. This step does not upgrade the kernel; that is handled separately in the next section:

yum update -y --exclude=kernel* && reboot # CentOS 7 should be upgraded; on CentOS 8, upgrade as needed

Kernel Upgrade

CentOS 7 needs its kernel upgraded to 4.18+; this guide upgrades to 4.19.

Copy the upgrade packages from master01 to the other nodes.

Create the directory on all nodes:

mkdir -p /root/k8s-ha-install

On the master node:

cd /root/k8s-ha-install
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/k8s-ha-install ; done

Install the kernel on all nodes:

ansible -i /etc/ansible/hosts tdsql -m shell -a "cd /root/k8s-ha-install && yum localinstall -y kernel-ml*"

Change the kernel boot order on all nodes (note the single quotes, which keep the inner double quotes intact and let the $(...) expand on the remote host):

ansible -i /etc/ansible/hosts tdsql -m shell -a "grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg"
ansible -i /etc/ansible/hosts tdsql -m shell -a 'grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"'

Check that the default kernel is 4.19:

ansible -i /etc/ansible/hosts tdsql -m shell -a "grubby --default-kernel"
ansible -i /etc/ansible/hosts tdsql -m shell -a "reboot"

Reboot all nodes, then confirm the running kernel is 4.19:

ansible -i /etc/ansible/hosts tdsql -m shell -a "uname -r"

Install ipvsadm on all nodes:

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the ipvs modules on all nodes. On kernel 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; below 4.18, use nf_conntrack_ipv4 instead:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

vim /etc/modules-load.d/ipvs.conf
# Add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Copy the file to the other nodes:

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/modules-load.d/ipvs.conf $i:/etc/modules-load.d/ipvs.conf; done

Load the modules:

ansible -i /etc/ansible/hosts tdsql -m shell -a "systemctl enable --now systemd-modules-load.service"

Check that they loaded:

[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack
nf_conntrack_ipv4 16384 23
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nf_conntrack 135168 10 xt_conntrack,nf_conntrack_ipv6,nf_conntrack_ipv4,nf_nat,nf_nat_ipv6,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_netlink,ip_vs

Enable the kernel parameters a Kubernetes cluster requires; configure on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/sysctl.d/k8s.conf $i:/etc/sysctl.d/k8s.conf; done

ansible -i /etc/ansible/hosts tdsql -m shell -a "sysctl --system"

After configuring the kernel parameters, reboot all servers and confirm the modules are still loaded after the reboot:

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

Basic Component Installation

This chapter installs the components the cluster depends on, such as Docker CE and the Kubernetes binaries.

Installing Docker

Install Docker CE 19.03 on all nodes:

yum install docker-ce-19.03.* -y

Tip:

Newer kubelet versions expect the systemd cgroup driver, so switch Docker's CgroupDriver to systemd.

Create the directory on all nodes:

ansible -i /etc/ansible/hosts tdsql -m shell -a "mkdir -p /etc/docker"

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/docker/daemon.json $i:/etc/docker/daemon.json; done

Enable Docker on boot on all nodes:

systemctl daemon-reload && systemctl enable --now docker

Installing K8s and etcd

Download the Kubernetes server package on Master01.

Run all of the following on master01.

Unpack the Kubernetes binaries:

tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Unpack the etcd binaries:

tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}

Check the versions:

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.20.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.4.13
API version: 3.4

Copy the binaries to the other nodes:

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

Create /opt/cni/bin on all nodes:

mkdir -p /opt/cni/bin

Generating Certificates

This is the most critical part of a binary installation: one wrong step ruins everything, so verify each command carefully.

Download the certificate tooling on Master01:

cd /root/k8s-ha-install
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
chmod 775 /usr/local/bin/cfssl
chmod 775 /usr/local/bin/cfssljson
chmod 775 /usr/bin/cfssl-certinfo

etcd Certificates

Create the etcd certificate directory on all master nodes:

mkdir /etc/etcd/ssl -p

Create the Kubernetes directories on all nodes:

mkdir -p /etc/kubernetes/pki

Generate the etcd certificates on Master01.

The CSR file is the certificate signing request; it configures the domain names, organization, and unit:

cd /root/k8s-ha-install/pki

# Generate the etcd CA certificate and its key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.0.107,192.168.0.108,192.168.0.109 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
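After generating a certificate it is worth confirming its SAN list really contains every hostname and IP you passed via -hostname; a missing entry surfaces later as TLS handshake failures. The sketch below creates a throwaway self-signed certificate (not the cfssl output above) just to demonstrate the inspection command, which works identically on /etc/etcd/ssl/etcd.pem:

```shell
# Create a scratch cert with etcd-like SANs, then inspect them with openssl.
# (Requires OpenSSL 1.1.1+ for -addext; all file names here are scratch files.)
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/demo-key.pem" -out "$dir/demo.pem" \
  -subj "/CN=etcd" \
  -addext "subjectAltName=DNS:k8s-master01,DNS:k8s-master02,DNS:k8s-master03,IP:192.168.0.107,IP:127.0.0.1" \
  2>/dev/null

# The same inspection applies to the real cert:
#   openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -ext subjectAltName
openssl x509 -in "$dir/demo.pem" -noout -ext subjectAltName
```

Every master hostname and IP from the cfssl -hostname list should appear in the printed SAN block.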

Copy the certificates to the other master nodes:

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'

for NODE in $MasterNodes; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done

k8s Component Certificates

Generate the Kubernetes CA on Master01:

[root@k8s-master01 pki]# cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

Generating the apiserver Certificate

# 10.96.0.1 is the first address of the k8s Service CIDR (10.96.0.0/12); if you change the Service CIDR, change 10.96.0.1 accordingly.
# If this is not an HA cluster, replace 192.168.0.236 with Master01's IP.

cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.0.236,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.0.107,192.168.0.108,192.168.0.109 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

Generating the requestheader-client / requestheader-allowed (aggregator) Certificates

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

The command prints a warning, which can be ignored.

Generating the controller-manager Certificate

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Note: if this is not an HA cluster, change 192.168.0.236:8443 to Master01's address and 8443 to the apiserver port (default 6443).

# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.0.236:8443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-context: define a context
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# use-context: make this context the default
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Generating the scheduler Certificate

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# Note: if this is not an HA cluster, change 192.168.0.236:8443 to Master01's address and 8443 to the apiserver port (default 6443).

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.0.236:8443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubernetes-admin

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# Note: if this is not an HA cluster, change 192.168.0.236:8443 to Master01's address and 8443 to the apiserver port (default 6443).

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.236:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig

Creating the ServiceAccount Key and Secret

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048

Output:

Generating RSA private key, 2048 bit long modulus (2 primes)
...................................................................................+++++
...............+++++
e is 65537 (0x010001)

openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
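sa.key and sa.pub must be the two halves of one RSA key pair: the apiserver signs ServiceAccount tokens with the private key and verifies them with the public key. Comparing the moduli is a quick sanity check. The sketch below runs it on a scratch key pair; the same two openssl commands apply to /etc/kubernetes/pki/sa.key and sa.pub:

```shell
# Verify that a private key and an extracted public key belong together.
dir=$(mktemp -d)
openssl genrsa -out "$dir/sa.key" 2048 2>/dev/null
openssl rsa -in "$dir/sa.key" -pubout -out "$dir/sa.pub" 2>/dev/null

priv_mod=$(openssl rsa -in "$dir/sa.key" -noout -modulus)
pub_mod=$(openssl rsa -pubin -in "$dir/sa.pub" -noout -modulus)

if [ "$priv_mod" = "$pub_mod" ]; then
  echo "sa.key and sa.pub match"
else
  echo "MISMATCH: regenerate the public key from the private key"
fi
```

A mismatch here means every ServiceAccount token will fail verification, so it is worth the five seconds.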

Copy the certificates to the other master nodes:

for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};done;done

List the certificate files:

[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr               admin-key.pem               admin.pem
apiserver.csr           apiserver-key.pem           apiserver.pem
ca.csr                  ca-key.pem                  ca.pem
controller-manager.csr  controller-manager-key.pem  controller-manager.pem
front-proxy-ca.csr      front-proxy-ca-key.pem      front-proxy-ca.pem
front-proxy-client.csr  front-proxy-client-key.pem  front-proxy-client.pem
sa.key                  sa.pub
scheduler.csr           scheduler-key.pem           scheduler.pem

[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ | wc -l
23

Kubernetes System Component Configuration

etcd Configuration

The etcd configuration is largely identical on all masters; adjust the name and IP addresses for each node. (The URL values below were stripped from the original article and are reconstructed from the node IPs, with peers on 2380 and clients on 2379; double-check them against your environment.)

Master01

vim /etc/etcd/etcd.config.yml

name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.107:2380'
listen-client-urls: 'https://192.168.0.107:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.107:2380'
advertise-client-urls: 'https://192.168.0.107:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.107:2380,k8s-master02=https://192.168.0.108:2380,k8s-master03=https://192.168.0.109:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

Master02

vim /etc/etcd/etcd.config.yml

name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.108:2380'
listen-client-urls: 'https://192.168.0.108:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.108:2380'
advertise-client-urls: 'https://192.168.0.108:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.107:2380,k8s-master02=https://192.168.0.108:2380,k8s-master03=https://192.168.0.109:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

Master03

vim /etc/etcd/etcd.config.yml

name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.109:2380'
listen-client-urls: 'https://192.168.0.109:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.109:2380'
advertise-client-urls: 'https://192.168.0.109:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.107:2380,k8s-master02=https://192.168.0.108:2380,k8s-master03=https://192.168.0.109:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

Creating the Service

Create and start the etcd service on all master nodes:

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

scp /usr/lib/systemd/system/etcd.service k8s-master02:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service k8s-master03:/usr/lib/systemd/system/etcd.service

Create the etcd certificate directory on all master nodes:

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload && systemctl enable --now etcd

Check the etcd status:

export ETCDCTL_API=3
etcdctl --endpoints="192.168.0.109:2379,192.168.0.108:2379,192.168.0.107:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table


Kubernetes Component Configuration

Create the directories on all nodes:

mkdir -p /etc/kubernetes/manifests/
mkdir -p /etc/systemd/system/kubelet.service.d
mkdir -p /var/lib/kubelet
mkdir -p /var/log/kubernetes

Apiserver

Create the kube-apiserver service on all master nodes. Note: if this is not an HA cluster, change 192.168.0.236 to Master01's address.

Master01 Configuration

Note: this document uses 10.96.0.0/12 as the k8s Service CIDR; it must not overlap the host network or the Pod CIDR. Adjust as needed.

cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.0.107 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.0.107:2379,https://192.168.0.108:2379,https://192.168.0.109:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Master02 Configuration

Note: this document uses 10.96.0.0/12 as the k8s Service CIDR; it must not overlap the host network or the Pod CIDR. Adjust as needed.

cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.0.108 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.0.107:2379,https://192.168.0.108:2379,https://192.168.0.109:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Master03 Configuration

Note: this document uses 10.96.0.0/12 as the k8s Service CIDR; it must not overlap the host network or the Pod CIDR. Adjust as needed.

cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.0.109 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.0.107:2379,https://192.168.0.108:2379,https://192.168.0.109:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Starting the apiserver

Enable kube-apiserver on all master nodes:

systemctl daemon-reload && systemctl enable --now kube-apiserver

Check the kube-apiserver status:

# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2020-08-22 21:26:49 CST; 26s ago

These messages in the system log can be ignored:

Dec 11 20:51:15 k8s-master01 kube-apiserver: I1211 20:51:15.004739 7450 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 11 20:51:15 k8s-master01 kube-apiserver: I1211 20:51:15.004843 7450 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc011bd4c80, {CONNECTING }
Dec 11 20:51:15 k8s-master01 kube-apiserver: I1211 20:51:15.010725 7450 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc011bd4c80, {READY }
Dec 11 20:51:15 k8s-master01 kube-apiserver: I1211 20:51:15.011370 7450 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"

ControllerManager

Configure the kube-controller-manager service on all master nodes.

Note: this document uses 172.16.0.0/12 as the k8s Pod CIDR; it must not overlap the host network or the k8s Service CIDR. Adjust as needed.

[root@k8s-master01 pki]# cat /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
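With --allocate-node-cidrs enabled, the controller-manager carves one subnet of --node-cidr-mask-size out of --cluster-cidr for each node, so these two flags bound both the maximum node count and the pod IPs available per node. The arithmetic is simple enough to check in shell:

```shell
# Capacity implied by --cluster-cidr=172.16.0.0/12 and --node-cidr-mask-size=24.
cluster_len=12
node_len=24

max_nodes=$(( 1 << (node_len - cluster_len) ))   # how many /24 subnets fit in a /12
ips_per_node=$(( 1 << (32 - node_len) ))         # addresses in each node's /24

echo "max nodes:        $max_nodes"
echo "pod IPs per node: $ips_per_node"
```

For this article's values the cluster can allocate 4096 node subnets of 256 addresses each; shrink the node mask (e.g. to /25) if you need more nodes and fewer pods per node.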

Start kube-controller-manager on all master nodes:

systemctl daemon-reload && systemctl enable --now kube-controller-manager

Check the startup status:

systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-12-11 20:53:05 CST; 8s ago
Main PID: 7518 (kube-controller)

Scheduler

Configure the kube-scheduler service on all master nodes:

[root@k8s-master01 pki]# cat /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

systemctl daemon-reload && systemctl enable --now kube-scheduler
systemctl status kube-scheduler

TLS Bootstrapping Configuration

Create the bootstrap configuration on Master01.

# Note: if this is not an HA cluster, change 192.168.0.236:8443 to Master01's address and 8443 to the apiserver port (default 6443).

cd /root/k8s-ha-install/bootstrap

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.236:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

Note: if you change token-id and token-secret in bootstrap.secret.yaml, the two values must stay consistent with each other and keep their original lengths, and the token in the set-credentials command above (c8ad9c.2e4d610cf3e7426e) must match the values you set.
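A bootstrap token has a fixed shape: a 6-character token-id, a dot, and a 16-character token-secret, all lowercase alphanumeric. A small bash check (the function name is illustrative) catches a malformed edit before the secret is applied:

```shell
# Validate the bootstrap token format used in bootstrap.secret.yaml.
is_bootstrap_token() {
  if [[ "$1" =~ ^[a-z0-9]{6}\.[a-z0-9]{16}$ ]]; then
    echo valid
  else
    echo invalid
  fi
}

is_bootstrap_token "c8ad9c.2e4d610cf3e7426e"   # the token from this section
is_bootstrap_token "c8ad9c.tooshort"           # secret shorter than 16 chars
```

If the token in the kubeconfig does not match the id.secret pair in bootstrap.secret.yaml exactly, kubelet bootstrapping fails with an authentication error.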

mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

kubectl create -f bootstrap.secret.yaml

Node Configuration

Copying Certificates

Copy the certificates from Master01 to the node machines:

cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
  for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
    scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
  done
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done

Kubelet Configuration

Create the required directories on all nodes:

mkdir -p /var/lib/kubelet ​

mkdir -p /var/log/kubernetes​

mkdir -p /etc/systemd/system/kubelet.service.d​

mkdir -p /etc/kubernetes/manifests/

Configure the kubelet service on all nodes:

vim /usr/lib/systemd/system/kubelet.service​

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /usr/lib/systemd/system/kubelet.service $i:/usr/lib/systemd/system/kubelet.service; done

Configure the kubelet service drop-in file on all nodes:

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf​

[Service]​

Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"

Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"

Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"

Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "

ExecStart=

ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

(The empty ExecStart= line clears the ExecStart inherited from kubelet.service before redefining it with the arguments above.)

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/systemd/system/kubelet.service.d/10-kubelet.conf $i:/etc/systemd/system/kubelet.service.d/10-kubelet.conf; done​

Create the kubelet configuration file.

Note: if you changed the k8s Service CIDR, change the clusterDNS: setting in kubelet-conf.yml to the 10th address of the Service CIDR, e.g. 10.96.0.10.

vim /etc/kubernetes/kubelet-conf.yml​

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
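The clusterDNS address above (10.96.0.10) is by convention the 10th address of the Service CIDR, here assumed to be 10.96.0.0/12. A throwaway helper, cidr_nth (hypothetical, for illustration only), showing the arithmetic:

```shell
#!/bin/bash
# cidr_nth: print the CIDR base address plus N, e.g. the 10th address used for clusterDNS.
cidr_nth() {
  local base=${1%/*} n=$2 a b c d ip
  IFS=. read -r a b c d <<< "$base"
  ip=$(( (a << 24) + (b << 16) + (c << 8) + d + n ))
  echo "$(( (ip >> 24) & 255 )).$(( (ip >> 16) & 255 )).$(( (ip >> 8) & 255 )).$(( ip & 255 ))"
}
cidr_nth 10.96.0.0/12 10   # prints 10.96.0.10
```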

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/kubernetes/kubelet-conf.yml $i:/etc/kubernetes/kubelet-conf.yml; done​

Start kubelet on all nodes:

systemctl daemon-reload && systemctl enable --now kubelet​

# Check status and logs

systemctl restart kubelet.service

systemctl status kubelet.service -l

journalctl -xeu kubelet

At this point it is normal for the system log (/var/log/messages) to show only the following message, because no CNI plugin is installed yet:

Unable to update cni config: no networks found in /etc/cni/net.d

Check the cluster status:

[root@k8s-master01 bootstrap]# kubectl get node​

kube-proxy Configuration

# Note: if this is not a highly available cluster, change 192.168.0.236:8443 to master01's address and change 8443 to the apiserver port, which defaults to 6443

Run the following on Master01, in a single shell session, because it defines variables used by later commands.

cd /root/k8s-ha-install

kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy

Fetch the required variables:

SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)

PKI_DIR=/etc/kubernetes/pki​

K8S_DIR=/etc/kubernetes​

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.236:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

On master01, send the kube-proxy systemd files to the other nodes.

If you changed the cluster's Pod CIDR, change the clusterCIDR: 172.16.0.0/12 parameter in kube-proxy/kube-proxy.conf to your Pod CIDR.

cd /root/k8s-ha-install​

PKI_DIR=/etc/kubernetes/pki​

K8S_DIR=/etc/kubernetes​

for NODE in k8s-master01 k8s-master02 k8s-master03; do​

scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig​

scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf​

scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service​

done​

for NODE in k8s-node01 k8s-node02; do​

scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig​

scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf​

scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service​

done​

Note: adjust the addresses in /etc/kubernetes/kube-proxy.conf to your environment.

Start kube-proxy on all nodes:

systemctl daemon-reload && systemctl enable --now kube-proxy​

systemctl status kube-proxy -l​

journalctl -xeu kube-proxy

Install Calico

For installing Calico, be sure to follow the video course, including the final chapter on upgrading Calico.

Run the following steps on master01 only.

cd /root/k8s-ha-install/calico/​

Modify the following locations in calico-etcd.yaml:

cat calico-etcd.yaml |grep etcd_endpoints

# Point etcd_endpoints at your own etcd cluster (master01's etcd in this layout), e.g.:
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.1.26:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`​

ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`​

ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`​

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml​

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml​

cat calico-etcd.yaml |grep etcd_ca

cat calico-etcd.yaml |grep etcd-key
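The base64 | tr -d '\n' plus sed pattern above inlines file contents into the manifest as a single line. A self-contained sketch of the same mechanics on throwaway files (not real certificates):

```shell
#!/bin/bash
# Inline a file's single-line base64 into a YAML placeholder, as done for the etcd certs above.
printf 'demo-cert-bytes' > /tmp/demo.pem
B64=$(base64 < /tmp/demo.pem | tr -d '\n')          # strip newlines so the value stays on one line
printf '# etcd-ca: null\n' > /tmp/demo.yaml          # commented-out placeholder, as in calico-etcd.yaml
sed -i "s@# etcd-ca: null@etcd-ca: ${B64}@g" /tmp/demo.yaml
cat /tmp/demo.yaml   # prints: etcd-ca: ZGVtby1jZXJ0LWJ5dGVz
```

The @ delimiter is used because the base64 payload may contain the characters sed would otherwise treat as delimiters.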

# Change this to your own Pod CIDR

POD_SUBNET="10.244.0.0/16"​

# Note: the step below uncomments CALICO_IPV4POOL_CIDR in calico-etcd.yaml and sets its value to your own Pod CIDR (replacing 192.168.x.x/16).
Make sure this CIDR was not already clobbered by the earlier blanket replacements; if it was, change it back:

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml​

kubectl apply -f calico-etcd.yaml​

Check pod status:

[root@k8s-master01 calico]# kubectl get po -n kube-system​

If a pod is unhealthy, inspect it with kubectl describe or kubectl logs.

Install CoreDNS

Install the matching version (recommended)

cd /root/k8s-ha-install/​

If you changed the k8s Service CIDR, set CoreDNS's Service IP to the 10th address of your Service CIDR by replacing the second 10.96.0.10 below with your own clusterDNS IP (as written, the command is a no-op that keeps the default):

sed -i "s#10.96.0.10#10.96.0.10#g" CoreDNS/coredns.yaml

Install CoreDNS:

[root@k8s-master01 k8s-ha-install]# kubectl create -f CoreDNS/coredns.yaml ​

serviceaccount/coredns created​

clusterrole.rbac.authorization.k8s.io/system:coredns created​

clusterrolebinding.rbac.authorization.k8s.io/system:coredns created​

configmap/coredns created​

deployment.apps/coredns created​

service/kube-dns created​

Install the latest CoreDNS

git clone https://github.com/coredns/deployment.git

cd deployment/kubernetes

# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -​

serviceaccount/coredns created​

clusterrole.rbac.authorization.k8s.io/system:coredns created​

clusterrolebinding.rbac.authorization.k8s.io/system:coredns created​

configmap/coredns created​

deployment.apps/coredns created​

service/kube-dns created​

Check the status:

# kubectl get po -n kube-system -l k8s-app=kube-dns​

NAME READY STATUS RESTARTS AGE​

coredns-85b4878f78-h29kh 1/1 Running 0 8h​

Install Metrics Server

In recent Kubernetes versions, system resource metrics are collected by metrics-server, which reports CPU, memory, disk, and network usage for nodes and Pods.

Install metrics-server:

cd /root/k8s-ha-install/metrics-server-0.4.x/​

kubectl create -f . ​

serviceaccount/metrics-server created​

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created​

clusterrole.rbac.authorization.k8s.io/system:metrics-server created​

rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created​

clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created​

clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created​

service/metrics-server created​

deployment.apps/metrics-server created​

apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created​

Wait for metrics-server to start, then check the status:

[root@k8s-master01 metrics-server-0.4.x]# kubectl top node​

NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ​

k8s-master01 231m 5% 1620Mi 42% ​

k8s-master02 274m 6% 1203Mi 31% ​

k8s-master03 202m 5% 1251Mi 32% ​

k8s-node01 69m 1% 667Mi 17% ​

k8s-node02 73m 1% 650Mi 16%​

Cluster Verification

Cluster verification is mandatory — refer to the cluster verification part of the video!

Install busybox:

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Verification checklist:
- Pods must be able to resolve Services
- Pods must be able to resolve Services in other namespaces
- Every node must be able to reach the kubernetes Service on port 443 and the kube-dns Service on port 53
- Pod-to-Pod communication must work: within the same namespace, across namespaces, and across nodes

Verify resolution (see the cluster verification video):

[root@k8s-master01 CoreDNS]# kubectl exec busybox -n default -- nslookup kubernetes​

Server: 192.168.0.10​

Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local​

Name: kubernetes​

Address 1: 192.168.0.1 kubernetes.default.svc.cluster.local​

[root@k8s-master01 CoreDNS]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system​

Server: 192.168.0.10​

Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local​

Name: kube-dns.kube-system​

Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local​

Install Dashboard

Dashboard Deployment

The Dashboard displays the cluster's resources; it can also tail Pod logs and execute commands inside containers in real time.

Install a specific Dashboard version

cd /root/k8s-ha-install/dashboard/​

[root@k8s-master01 dashboard]# kubectl create -f .​

serviceaccount/admin-user created​

clusterrolebinding.rbac.authorization.k8s.io/admin-user created​

namespace/kubernetes-dashboard created​

serviceaccount/kubernetes-dashboard created​

service/kubernetes-dashboard created​

secret/kubernetes-dashboard-certs created​

secret/kubernetes-dashboard-csrf created​

secret/kubernetes-dashboard-key-holder created​

configmap/kubernetes-dashboard-settings created​

role.rbac.authorization.k8s.io/kubernetes-dashboard created​

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created​

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created​

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created​

deployment.apps/kubernetes-dashboard created​

service/dashboard-metrics-scraper created​

deployment.apps/dashboard-metrics-scraper created​

Install the latest version

Official GitHub repo: https://github.com/kubernetes/dashboard — apply the latest recommended.yaml from its releases, then create the administrator user:

vim admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

kubectl apply -f admin.yaml -n kube-system

Log in to Dashboard

Add the following launch parameters to the Chrome shortcut to work around the certificate error that otherwise blocks access to the Dashboard (see Figure 1-1):

--test-type --ignore-certificate-errors​

Figure 1-1 Chrome configuration

Change the Dashboard Service to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change the type from ClusterIP to NodePort (skip this step if it is already NodePort):
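After the edit, the relevant part of the Service spec looks roughly like the sketch below (the nodePort value is an example — the cluster assigns one if you omit it):

```yaml
spec:
  type: NodePort          # changed from ClusterIP
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30443       # example value; omit to let the cluster pick one
  selector:
    k8s-app: kubernetes-dashboard
```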

Check the port number:

Using your instance's port number, the Dashboard is reachable on any host running kube-proxy (or on the VIP) at IP:port:

Access the Dashboard at https://<host-IP>:<NodePort> and choose Token as the login method.

Get the token value:

[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')​

Name: admin-user-token-r4vcp​

Namespace: kube-system​

Labels:

Annotations: kubernetes.io/service-account.name: admin-user​

kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023​

Type: kubernetes.io/service-account-token​

Data​

====​

ca.crt: 1025 bytes​

namespace: 11 bytes​

token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w​

Paste the token into the Token field and click Sign in to access the Dashboard (see Figure 1-3):

Figure 1-3 Dashboard page

Key Production Configuration

For the key production settings, refer to the video; do not apply them blindly!

vim /etc/docker/daemon.json​

{
  "registry-mirrors": ["<your-registry-mirror>"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": {
    "max-size": "300m",
    "max-file": "2"
  },
  "live-restore": true
}

vim /usr/lib/systemd/system/kube-controller-manager.service
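With the log-opts above, Docker keeps at most max-file rotated json log files of max-size each, so per-container log usage is capped at roughly their product:

```shell
#!/bin/bash
# max-size 300m x max-file 2 => about 600 MB of json logs per container.
max_size_mb=300
max_file=2
echo "$(( max_size_mb * max_file ))MB"   # prints 600MB
```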

# --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \​

--cluster-signing-duration=876000h0m0s \​

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]

Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig"

Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"

Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"

Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"

ExecStart=​

ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS​

Refer to the video for these key settings; applying them blindly may break the cluster!!

vim /etc/kubernetes/kubelet-conf.yml​

Add the following configuration:

rotateServerCertificates: true
allowedUnsafeSysctls:
 - "net.core*"
 - "net.ipv4.*"
kubeReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
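kubeReserved and systemReserved carve resources out of what the kubelet advertises as schedulable: Allocatable = capacity − kubeReserved − systemReserved − evictionHard. With the values above on a hypothetical 16Gi node:

```shell
#!/bin/bash
# Allocatable memory = capacity - kubeReserved - systemReserved - evictionHard(memory.available).
capacity_mi=16384        # hypothetical 16Gi node
kube_reserved_mi=1024    # kubeReserved memory: 1Gi
system_reserved_mi=1024  # systemReserved memory: 1Gi
eviction_hard_mi=100     # evictionHard memory.available: 100Mi
echo "$(( capacity_mi - kube_reserved_mi - system_reserved_mi - eviction_hard_mi ))Mi"   # prints 14236Mi
```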

Installation summary:

- Installation approaches: kubeadm, binary, or automated installation with Ansible. Master installation does not need automation; adding Node nodes is a good fit for a playbook.
- Mind the detailed configuration above.
- In production, etcd must live on a disk separate from the system disk, ideally SSD; the Docker data disk should likewise be separate from the system disk, SSD if possible.
