2022-09-12
Building a highly available Kubernetes 1.17.0 cluster (IPVS networking)
Reference: a high-availability setup guide.
Official documentation:
https://kubernetes.io/zh/docs/admin/high-availability/#%E5%A4%8D%E5%88%B6%E7%9A%84API%E6%9C%8D%E5%8A%A1
Passwordless SSH between nodes: run ssh-keygen, then copy the contents of id_rsa.pub into ~/.ssh/authorized_keys on each of the other machines.
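A small loop makes the key distribution described above less tedious. This is an added sketch; ssh-copy-id and the hard-coded node IPs (taken from the env.sh shown later) are assumptions, not part of the original note:

# Generate a key pair once on the machine driving the installation
ssh-keygen -t rsa
# Push the public key to every node; equivalent to appending id_rsa.pub
# to ~/.ssh/authorized_keys on each machine
for ip in 10.2.33.5 10.2.33.127 10.2.33.65; do
  ssh-copy-id root@${ip}
done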
Configure hosts

Set a permanent hostname on each machine (use that machine's own name), then log in again:
# hostnamectl set-hostname kube-master
Populate /etc/hosts on every node, for example:
# cat /etc/hosts
10.2.33.5    kube-node1 nginx.btcexa.com test.btcexa.com k8s.grafana.btcexa.com
10.2.33.127  kube-node2 nginx.btcexa.com test.btcexa.com
10.2.33.65   kube-node3 nginx.btcexa.com test.btcexa.com
10.2.33.5    nginx.btcexa.com test.btcexa.com test-ning.btcexa.com k8s.grafana.btcexa.com k8s.prometheus.btcexa.com traefik-admin.btcexa.com traefik-nginx.btcexa.com
Kernel configuration

Upgrade CentOS packages and the kernel

yum -y update
yum -y install yum-plugin-fastestmirror
yum install -y epel-release
rpm --import ...
rpm -Uvh ...
yum -y --enablerepo=elrepo-kernel install kernel-ml
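If you want to see which kernel-ml builds ELRepo currently offers before installing, a standard yum query (an addition, not part of the original notes) is:

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available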
Set the newly installed kernel as the default boot kernel

grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
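After rebooting, it is worth confirming that the node actually came up on the new kernel; these checks are an addition to the original steps:

reboot
# once the node is back:
uname -r              # should print the freshly installed kernel-ml version
grub2-editenv list    # shows the saved default boot entry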
Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
sed -i "s/SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
Disable swap

swapoff -a
sed -i '/^.*swap.*/d' /etc/fstab
Kernel parameters (sysctl)

- Adjust the kernel parameters needed for bridged traffic and iptables:
# modprobe overlay
# modprobe br_netfilter
# Setup required sysctl params, these persist across reboots.
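The heredoc that writes /etc/sysctl.d/99-kubernetes-cri.conf is cut off in the source. The following is a sketch based on the standard Kubernetes/CRI recommendation for this file, not the recovered original:

cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply the new settings without a reboot
sysctl --system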
Configure limits.conf

cat >> /etc/security/limits.conf << EOF
* soft nproc 1024000
* hard nproc 1024000
* soft nofile 1024000
* hard nofile 1024000
* soft core 1024000
* hard core 1024000
###### big mem ########
#* hard memlock unlimited
#* soft memlock unlimited
EOF
Configure 20-nproc.conf
sed -i 's/4096/1024000/' /etc/security/limits.d/20-nproc.conf
Set the journald log size limit and storage path

echo SystemMaxUse=600M >> /etc/systemd/journald.conf
mkdir -p /var/log/journal
chown root:systemd-journal /var/log/journal
chmod 2755 /var/log/journal
systemctl restart systemd-journald
Enable IPVS (for kube-proxy)

Install the required command-line tools
# yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp socat fuse fuse-libs nfs-utils nfs-utils-lib pciutils ebtables ethtool
Load the IPVS modules (takes effect immediately, not persistent)

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
Make the IPVS modules persistent
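The heredoc that writes /etc/sysconfig/modules/ipvs.modules is truncated in the source; a typical version, assembled from the module list above (a sketch, not the original script), looks like this:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules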
Check that the IPVS modules loaded:
# lsmod | grep ip_vs
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

Confirm the br_netfilter module:
# lsmod | grep br_netfilter
# Enable this kernel module so that packets crossing the bridge are handed to iptables for filtering and port forwarding, and the Kubernetes nodes in the cluster can reach one another.
modprobe br_netfilter

If kube-proxy is to run in IPVS mode, the following modules must be present on all Kubernetes nodes: ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4.

Kubernetes installation

Set up the global environment

# mkdir -p /opt/k8s/{bin,ssl,cfg}

- Generate the token file for the apiserver:
# date | sha1sum | awk '{print $1}'
b681138df1a8e0c2ddb8daff35490435caa5ff7a
# cd /opt/k8s/ssl
# cat > /opt/k8s/ssl/token.csv <<EOF
...

- Create the environment file:
# vim /opt/k8s/env.sh
export BOOTSTRAP_TOKEN=b681138df1a8e0c2ddb8daff35490435caa5ff7a
# Preferably pick currently unused ranges for the service and Pod networks.
# Service network: unroutable before deployment, routable inside the cluster afterwards (kube-proxy and IPVS take care of this)
SERVICE_CIDR="10.254.0.0/16"
# Pod network, a /16 is recommended: unroutable before deployment, routable inside the cluster afterwards (flanneld takes care of this)
CLUSTER_CIDR="10.10.0.0/16"
# Service (NodePort) port range
export NODE_PORT_RANGE="30000-50000"
# IPs of all cluster machines
export NODE_IPS=(10.2.33.5 10.2.33.127 10.2.33.65)
# Hostnames corresponding to those IPs
export NODE_NAMES=(kube-node1 kube-node2 kube-node3)
# kube-apiserver node IP
export MASTER_IP=0.0.0.0
# Internal kube-apiserver address
export KUBE_APISERVER="..."
# External kube-apiserver address
export KUBE_PUBLIC_APISERVER="..."
# etcd cluster endpoint list
export ETCD_ENDPOINTS="..."
# flanneld network configuration prefix
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# Kubernetes service IP (normally the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"
# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"
# Cluster DNS domain
export CLUSTER_DNS_DOMAIN="cluster.local."

Install the cfssl tools used to sign certificates (installing them on the master node is enough):
cd /opt/k8s/
wget ...
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/bin/cfssl
mv cfssljson_linux-amd64 /usr/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Install Docker and configure its registry mirror:
# Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo ...
yum install docker-ce -y
# Configure the Docker registry mirror
curl -sSL ... | sh -s
# Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker

Generate the etcd certificates

Create a directory to hold the certificates and change into it:
cd /opt/k8s/ssl

Create the JSON file used to generate the CA certificate. Set the expiry generously; renewing an expired certificate is a real nuisance.
cat > ca-config.json <<EOF
...

Create the certificate signing request JSON file:
cat > ca-csr.json <<EOF
...

Generate the CA certificate (ca.pem) and key (ca-key.pem):
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2019/12/26 09:33:53 [INFO] generating a new CA key and certificate from CSR
2019/12/26 09:33:53 [INFO] generate received request
2019/12/26 09:33:53 [INFO] received CSR
2019/12/26 09:33:53 [INFO] generating key: rsa-2048
2019/12/26 09:33:53 [INFO] encoded CSR
2019/12/26 09:33:53 [INFO] signed certificate with serial number 76090837348387020865481584188520719234232827929

- The result:
ls ./
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Generate a certificate for etcd:
cat > etcd-csr.json <<EOF
...

Generate it:
cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2019/12/26 09:34:26 [INFO] generate received request
2019/12/26 09:34:26 [INFO] received CSR
2019/12/26 09:34:26 [INFO] generating key: rsa-2048
2019/12/26 09:34:26 [INFO] encoded CSR
2019/12/26 09:34:26 [INFO] signed certificate with serial number 680872829262173782320244647098818402787647586534
2019/12/26 09:34:26 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (section 10.2.3, "Information Requirements").
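The ca-config.json and ca-csr.json heredocs above are truncated in the source. A typical pair for this step is sketched below; the long expiry is an assumption, the "kubernetes" profile name comes from the cfssl commands in this guide, and the names block mirrors the kube-proxy CSR shown later:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "876000h" },
    "profiles": {
      "kubernetes": {
        "expiry": "876000h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Shanghai", "L": "Shanghai", "O": "k8s", "OU": "System" }
  ]
}
EOF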
Install etcd

Download, unpack, and copy it to the install directory. Official site: ...
cd /opt/k8s && wget ...
tar xf etcd-v3.3.13-linux-amd64.tar.gz

Use the following script to generate the etcd configuration file and the systemd unit:
# vim init-etcd.sh
#!/bin/bash
source /opt/env.sh
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
cat <<EOF ...
...

Deploy etcd:
vim etcd_install.sh
#!/bin/bash
cp -avr /opt/k8s/env.sh /opt/env.sh
source /opt/env.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ##### etcd
  # Create the etcd directories
  ssh root@${node_ip} "mkdir -p /opt/etcd/{cfg,bin,ssl}"
  # Copy the binaries
  scp /opt/k8s/etcd-v3.3.13-linux-amd64/{etcd,etcdctl} root@${node_ip}:/opt/etcd/bin/
  scp /opt/k8s/env.sh root@${node_ip}:/opt/
  # Copy the configuration-generating script
  scp /opt/k8s/init-etcd.sh root@${node_ip}:/opt/
  # Copy the certificates
  cd /opt/k8s/ssl/
  scp etcd*pem ca*.pem root@${node_ip}:/opt/etcd/ssl/
  #####
done
ssh root@10.2.33.5   "cd /opt/ && sh init-etcd.sh etcd01 10.2.33.5   etcd01=..."
ssh root@10.2.33.127 "cd /opt  && sh init-etcd.sh etcd02 10.2.33.127 etcd01=..."
ssh root@10.2.33.65  "cd /opt/ && sh init-etcd.sh etcd03 10.2.33.65  etcd01=..."

sh etcd_install.sh

Start etcd on the primary node first; its terminal will hang until etcd has also been started on the other two nodes, after which it is released.

Run on all three nodes:
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd

# Test after all three etcd members are up. Correct output looks like this:
/opt/etcd/bin/etcdctl --endpoints=... --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/etcd.pem --key-file=/opt/etcd/ssl/etcd-key.pem cluster-health
member 255b6ed818720e20 is healthy: got healthy result from ...
member cbc6185ed5ac53ae is healthy: got healthy result from ...
member ccdbf5bbe09e862d is healthy: got healthy result from ...
cluster is healthy

Install Kubernetes

Master node installation

Generate the master certificates

Generate the apiserver certificate (signed by the same CA used for the etcd certificates):
cd /opt/k8s/ssl
cat > /opt/k8s/ssl/kubernetes-csr.json <<EOF
...

Generate the kubectl (admin) certificate:
cd /opt/k8s/ssl
# cat > /opt/k8s/ssl/admin-csr.json <<EOF
...

Download, unpack, and copy the files. Download address: ...
cd /opt/k8s && wget ...
cd /opt/k8s && wget ...
tar -xf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{cfg,bin,ssl}
- Copy the binaries into the install directory:
cd /opt/k8s/kubernetes/server/bin/
\cp -avr kube-apiserver kube-controller-manager kube-scheduler kubectl /opt/kubernetes/bin/
- Copy the binaries to the other HA master:
scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@10.2.33.127:/opt/kubernetes/bin/
- Copy the certificates into the Kubernetes ssl directory:
cd /opt/k8s/ssl
\cp -avr kubernetes*pem ca*pem adm* token.csv /opt/kubernetes/ssl/
scp kubernetes*pem ca*pem adm* token.csv root@10.2.33.127:/opt/kubernetes/ssl/

Install the apiserver

Install the apiserver with the following script:
cd /opt/k8s
vim install-apiserver.sh
#!/bin/bash
source /opt/k8s/env.sh
#MASTER_ADDRESS=${1:-"10.2.33.5"}
#ETCD_SERVERS=${2:-"..."}
...

Install the controller-manager

Install it with the following script:
# vim install-controller-manager.sh
#!/bin/bash
source /opt/k8s/env.sh
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF ...
...

Install the scheduler

Install the kube-scheduler service with the following script:
# vim install_kube-scheduler.sh
#!/bin/bash
#MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF ...
...

Start the master components:
systemctl daemon-reload
systemctl enable kube-apiserver && systemctl restart kube-apiserver && systemctl status kube-apiserver
systemctl enable kube-controller-manager && systemctl restart kube-controller-manager && systemctl status kube-controller-manager
systemctl enable kube-scheduler && systemctl restart kube-scheduler && systemctl status kube-scheduler

Check their status:
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
Install kubectl

1) Install kubectl with the following script (generates the internal kubeconfig).
Internal path: AWS DNS (kubernetes.exa.local) ---> internal ALB (TCP mode) --> target group (TCP mode) ---> k8s master nodes (port 6443).

cat > /opt/k8s/kubectl_private_install.sh << EOF
# Load the environment variables
source /opt/k8s/env.sh
# Set the apiserver address
#KUBE_APISERVER='...'
# Set cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=\${KUBE_APISERVER} --kubeconfig=admin_private.kubeconfig
# Set client authentication parameters
/opt/kubernetes/bin/kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem --kubeconfig=admin_private.kubeconfig
# Set context parameters
# /opt/kubernetes/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --namespace=kube-system --kubeconfig=admin_private.kubeconfig
/opt/kubernetes/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --namespace=default --kubeconfig=admin_private.kubeconfig
# Set the default context
/opt/kubernetes/bin/kubectl config use-context kubernetes --kubeconfig=admin_private.kubeconfig
EOF

Run the script (once, on a single master node, is enough):
# sh /opt/k8s/kubectl_private_install.sh
Cluster "kubernetes" set.
User "admin" set.
Context "kubernetes" created.
Switched to context "kubernetes".
# Copy admin_private.kubeconfig to /root/.kube/config
cp /opt/k8s/admin_private.kubeconfig /root/.kube/config

2) Install kubectl with the following script (generates the external kubeconfig).
External path: AWS DNS ---> external ALB (TCP mode) --> target group (TCP mode) ---> k8s master nodes (port 6443).

cat > /opt/k8s/kubectl_public_install.sh << EOF
# Load the environment variables
#source /opt/k8s/env.sh
# Set the apiserver address
KUBE_APISERVER='...'
# Set cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=\${KUBE_APISERVER} --kubeconfig=admin_public.kubeconfig
# Set client authentication parameters
/opt/kubernetes/bin/kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem --kubeconfig=admin_public.kubeconfig
# Set context parameters
# /opt/kubernetes/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --namespace=kube-system --kubeconfig=admin_public.kubeconfig
/opt/kubernetes/bin/kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=admin_public.kubeconfig
# Set the default context
/opt/kubernetes/bin/kubectl config use-context kubernetes --kubeconfig=admin_public.kubeconfig
EOF

Run the script (once, on a single master node, is enough):
# sh /opt/k8s/kubectl_public_install.sh
Cluster "kubernetes" set.
User "admin" set.
Context "kubernetes" created.
Switched to context "kubernetes".
# Copy admin_public.kubeconfig to /root/.kube/config
cp /opt/k8s/admin_public.kubeconfig /root/.kube/config
# If you need to manage the cluster over the public network (in testing, public-network operations are slow!)
scp /opt/k8s/admin_public.kubeconfig root@10.2.33.127:/root/.kube/config

Add the environment variables on all master nodes

- Put the Kubernetes commands on the PATH (on every master node):
cat > /etc/profile.d/k8s.sh <<EOF
...

Use kubectl to check that the multi-master setup works; run this on every master node:
# kubectl get cs   // (the "unknown" status seen on 1.16.0 and 1.16.4 does not occur on 1.17.0)
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
# kubectl cluster-info
Kubernetes master is running at ...
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
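The /etc/profile.d/k8s.sh heredoc above is cut off in the source; assuming it only puts the Kubernetes binaries on the PATH, a minimal version is:

cat > /etc/profile.d/k8s.sh <<EOF
export PATH=\$PATH:/opt/kubernetes/bin
EOF
source /etc/profile.d/k8s.sh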
Install and add node nodes (machine initialization, Docker install, etc.)

Install kubelet, kube-proxy, and the flannel plugin

Generate the kube-proxy certificate:
cat > /opt/k8s/ssl/kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
# cd /opt/k8s/ssl/
# cfssl gencert -ca=/opt/k8s/ssl/ca.pem -ca-key=/opt/k8s/ssl/ca-key.pem -config=/opt/k8s/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/12/26 09:59:43 [INFO] generate received request
2019/12/26 09:59:43 [INFO] received CSR
2019/12/26 09:59:43 [INFO] generating key: rsa-2048
2019/12/26 09:59:43 [INFO] encoded CSR
2019/12/26 09:59:43 [INFO] signed certificate with serial number 157028017693635972642773375308791716823103748513
2019/12/26 09:59:43 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (section 10.2.3, "Information Requirements").

Generate the flannel certificate:
cat > flanneld-csr.json <<EOF
...

Create the role binding:
# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

On the master node, generate the bootstrap.kubeconfig and kube-proxy.kubeconfig files:
cd /opt/k8s/
vim gen-kubeconfig.sh
# Load the environment variables
source /opt/k8s/env.sh
#--------- Create the kubelet bootstrapping kubeconfig ------------
#BOOTSTRAP_TOKEN=c76835f029914e3693a9834295bb840910211916   # must match /opt/kubernetes/ssl/token.csv
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#--------- Create the kube-proxy kubeconfig -------------
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Generate bootstrap.kubeconfig and kube-proxy.kubeconfig:
# cp /opt/k8s/ssl/kube-proxy*.pem /opt/kubernetes/ssl/
# cd /opt/k8s/ && sh gen-kubeconfig.sh

Download the flannel package (see the flannel release site for the version):
cd /opt/k8s/ && wget ...
tar xf flannel-v0.11.0-linux-amd64.tar.gz

Then run the following from /opt/etcd/ssl. Flannel stores its own subnet information in etcd, so it must be able to reach etcd; write the predefined subnet:
source /opt/k8s/env.sh
/opt/etcd/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/opt/k8s/ssl/ca.pem \
  --cert-file=/opt/k8s/ssl/flanneld.pem \
  --key-file=/opt/k8s/ssl/flanneld-key.pem \
  set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

Note: if ${FLANNEL_ETCD_PREFIX} is not specified when flanneld starts, it falls back to the default key /coreos.com/network/config.

flanneld service installation and configuration:
vim /opt/k8s/install_flanneld.sh
source /opt/k8s/env.sh
# Write the flanneld configuration file:
cat > /opt/k8s/cfg/flanneld << EOF
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-prefix=${FLANNEL_ETCD_PREFIX} -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/flanneld.pem -etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
EOF
# flanneld systemd unit file
cat > /opt/k8s/cfg/flanneld.service <<EOF
...

Generate the flanneld configuration and service:
sh install_flanneld.sh

kubelet service installation and configuration:
# kubelet tuning: reserve resources
# vim install-kubelet.sh
#!/bin/bash
source /opt/env.sh
NODE_ADDRESS=$1
#DNS_SERVER_IP=${2:-"10.254.0.2"}
cat <<EOF ...
...

kube-proxy service installation and configuration:
# vim install-kube-proxy.sh
#!/bin/bash
source /opt/env.sh
NODE_ADDRESS=$1
cat <<EOF ...
...

Copy the relevant files (they come from the Kubernetes and flannel packages) to the bin directory on every node:
# vim install_node.sh
#!/bin/bash
source /opt/k8s/env.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  #####etcd
  # Create the kubernetes directories
  ssh root@${node_ip} "mkdir -p /opt/kubernetes/{cfg,bin,ssl}"
  #------------------------------------------------------------
  ### Copy the flanneld files
  cd /opt/k8s/ssl/
  scp -p flanneld*.pem root@${node_ip}:/opt/kubernetes/ssl/
  scp -p /opt/k8s/cfg/flanneld root@${node_ip}:/opt/kubernetes/cfg/
  # Copy the flanneld binaries
  cd /opt/k8s
  scp -p flanneld mk-docker-opts.sh root@${node_ip}:/opt/kubernetes/bin/
  # Copy the flanneld and docker unit files
  cd /opt/k8s/cfg/
  scp -p flanneld.service docker.service root@${node_ip}:/usr/lib/systemd/system/
  # ssh root@${node_ip} "systemctl daemon-reload && systemctl restart flanneld && systemctl restart docker"
  #-----------------------------------------------------------
  ### Copy the kubelet and kube-proxy files
  cd /opt/k8s/
  scp -p bootstrap.kubeconfig kube-proxy.kubeconfig root@${node_ip}:/opt/kubernetes/cfg/
  cd /opt/k8s/ssl/
  scp -p ca.pem kube-proxy*.pem root@${node_ip}:/opt/kubernetes/ssl/
  # Copy the kubelet and kube-proxy binaries
  cd /opt/k8s/kubernetes/server/bin/
  scp -p kubelet kube-proxy root@${node_ip}:/opt/kubernetes/bin/
  # Copy the install scripts
  cd /opt/k8s
  scp -p env.sh install-kubelet.sh install-kube-proxy.sh root@${node_ip}:/opt/
done
# node1
ssh root@10.2.33.5 "sh /opt/install-kubelet.sh 10.2.33.5"
ssh root@10.2.33.5 "sh /opt/install-kube-proxy.sh 10.2.33.5"
# node2
ssh root@10.2.33.127 "sh /opt/install-kubelet.sh 10.2.33.127"
ssh root@10.2.33.127 "sh /opt/install-kube-proxy.sh 10.2.33.127"
# node3
ssh root@10.2.33.65 "sh /opt/install-kubelet.sh 10.2.33.65"
ssh root@10.2.33.65 "sh /opt/install-kube-proxy.sh 10.2.33.65"

Start the node components:
# sh install_node.sh
# vim start_node.sh
source /opt/k8s/env.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done
# sh start_node.sh

On the master node, check for node join requests (CSRs):
# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-Cw58qBpbo91wRpOGa81fFP5KfnqiRGVMKzuSMcfbH4A   9s    kubelet-bootstrap   Pending
node-csr-NUUmAMLXGjyQUxv0tvn1zONDbMU1gkgJz_9t8CR28oI   7s    kubelet-bootstrap   Pending
node-csr-q0s0lu-XbtWNg02MonWgISrulUScob12S7il-HR5-YU   6s    kubelet-bootstrap   Pending

Approve the requests on the master node:
# kubectl get csr | grep 'Pending' | awk '{print $1}' | xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/node-csr-Cw58qBpbo91wRpOGa81fFP5KfnqiRGVMKzuSMcfbH4A approved
certificatesigningrequest.certificates.k8s.io/node-csr-NUUmAMLXGjyQUxv0tvn1zONDbMU1gkgJz_9t8CR28oI approved
certificatesigningrequest.certificates.k8s.io/node-csr-q0s0lu-XbtWNg02MonWgISrulUScob12S7il-HR5-YU approved

Check the approved nodes on the master:
# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-Cw58qBpbo91wRpOGa81fFP5KfnqiRGVMKzuSMcfbH4A   23s   kubelet-bootstrap   Approved,Issued
node-csr-NUUmAMLXGjyQUxv0tvn1zONDbMU1gkgJz_9t8CR28oI   21s   kubelet-bootstrap   Approved,Issued
node-csr-q0s0lu-XbtWNg02MonWgISrulUScob12S7il-HR5-YU   20s   kubelet-bootstrap   Approved,Issued

Check the node status (10.2.33.65, 10.2.33.5, 10.2.33.127):
# kubectl get node
NAME          STATUS     ROLES   AGE   VERSION
10.2.33.127   NotReady   ...

Check the state of the cluster components

On all three nodes, check whether the services are running:
systemctl status kube-proxy
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
systemctl status etcd
systemctl status flanneld
systemctl status docker
systemctl status kubelet

Start (or restart) them if needed:
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kube-proxy
systemctl start etcd
systemctl start flanneld
systemctl restart docker
systemctl start kubelet

Stop them with:
systemctl stop etcd
systemctl stop kube-apiserver
systemctl stop kube-controller-manager
systemctl stop kube-scheduler
systemctl stop flanneld
systemctl stop docker
systemctl stop kubelet
systemctl stop kube-proxy

Check the endpoints:
# kubectl get endpoints
NAME         ENDPOINTS                         AGE
kubernetes   10.2.33.127:6443,10.2.33.5:6443   22m

Check script:
# vim check_flanneld.sh
#!/bin/bash
source /opt/k8s/env.sh
/opt/etcd/bin/etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config
/opt/etcd/bin/etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets
# sh check_flanneld.sh
Output:
{"Network":"10.10.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}
/kubernetes/network/subnets/10.10.63.0-24
/kubernetes/network/subnets/10.10.69.0-24
/kubernetes/network/subnets/10.10.40.0-24

Look at a subnet entry:
# source /opt/k8s/env.sh
# /opt/etcd/bin/etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/10.10.63.0-24
Output:
{"PublicIP":"10.2.33.65","BackendType":"vxlan","BackendData":{"VtepMAC":"92:4f:b7:1d:24:ef"}}

Test commands:
# ssh 10.2.33.65 "ip addr show flannel.1 | grep -w inet"
    inet 10.10.63.0/32 scope global flannel.1
# ssh 10.2.33.65 "ping -c 1 10.10.63.0"
PING 10.10.63.0 (10.10.63.0) 56(84) bytes of data.
64 bytes from 10.10.63.0: icmp_seq=1 ttl=64 time=0.062 ms
# telnet 10.2.33.65 22
Trying 10.2.33.65...
Connected to 10.2.33.65.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.4

Then check the IP configuration on each node and try to SSH to the flannel interface IPs of the other nodes. If the connection succeeds, flannel is configured correctly.

CoreDNS

There are two installation methods.

Method 1: use the manifests bundled with the Kubernetes package.
# tar xf kubernetes-server-linux-amd64.tar.gz
# cd /opt/k8s/kubernetes
# tar xf kubernetes-src.tar.gz
# cd /opt/k8s/kubernetes/cluster/addons/dns/coredns
# cp coredns.yaml.base coredns.yaml
Edit the configuration file:
# diff coredns.yaml.base coredns.yaml
68c68
<         kubernetes __PILLAR__DNS__DOMAIN__ in-addr.arpa ip6.arpa {
---
>         kubernetes 10.254.0.0/16 cluster.local. in-addr.arpa ip6.arpa {
95a96
>   replicas: 2
118a120
>         #image: k8s.gcr.io/coredns:1.6.2
189c191
<   clusterIP: __PILLAR__DNS__SERVER__
---
>   clusterIP: 10.254.0.2   # cluster DNS address

Method 2: use the coredns deployment repository (image: coredns/coredns:1.6.5).
# cd /opt/k8s
# git clone ...
Cloning into 'deployment'...
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 402 (delta 0), reused 0 (delta 0), pack-reused 401
Receiving objects: 100% (402/402), 117.14 KiB | 122.00 KiB/s, done.
Resolving deltas: 100% (191/191), done.
# mv deployment/ coredns/
# cd /opt/k8s/coredns/kubernetes
# yum -y install jq conntrack-tools
# ./deploy.sh -s -r 10.254.0.0/16 -i 10.254.0.2 -d cluster.local > coredns.yaml
# diff coredns.yaml.sed coredns.yaml
61c61
<         kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {
---
>         kubernetes cluster.local 10.254.0.0/16 {
63c63
<         }FEDERATIONS
---
>         }
65c65
<         forward . UPSTREAMNAMESERVER
---
>         forward . /etc/resolv.conf
70c70
<         }STUBDOMAINS
---
>         }
183c183
<   clusterIP: CLUSTER_DNS_IP
---
>   clusterIP: 10.254.0.2
# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
# cd /opt/k8s/
# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kube-dns   ClusterIP   10.254.0.2   ...

Check and verify

View the logs:
# kubectl logs -f coredns-7b5fbb568b-xqjck -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 1ee2e9685eedeba796e481c372ac7de4
CoreDNS-1.6.6
linux/amd64, go1.13.5, 6a7a75e

First create an nginx pod:
# cat > my-nginx.yaml <<EOF
...

Inspect the IPVS virtual servers:
[root@ip-10-2-33-5 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 10.2.33.5:6443               Masq    1      0          0
  -> 10.2.33.127:6443             Masq    1      1          0
TCP  10.254.0.2:53 rr
  -> 10.10.69.2:53                Masq    1      0          0
TCP  10.254.0.2:9153 rr
  -> 10.10.69.2:9153              Masq    1      0          0
TCP  10.254.13.183:80 rr
  -> 10.10.40.2:80                Masq    1      0          0
  -> 10.10.63.2:80                Masq    1      0          0
UDP  10.254.0.2:53 rr
  -> 10.10.69.2:53                Masq    1      0          8
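Before digging into the kube-proxy logs, it can help to confirm that CoreDNS actually resolves cluster services. This quick check is an addition to the original guide (busybox:1.28 is chosen only because its nslookup behaves well):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
# With the addresses used in this guide, a working setup should answer from
# 10.254.0.2 and resolve kubernetes.default.svc.cluster.local to 10.254.0.1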
# journalctl -u kube-proxy.service
-- Logs begin at Thu 2019-12-26 09:04:00 UTC, end at Thu 2019-12-26 10:30:39 UTC. --
Dec 26 10:06:24 ip-10-2-33-5.ec2.internal kube-proxy[2850]: I1226 10:06:24.978700    2850 flags.go:33] FLAG: --cleanup-ipvs="true"
Dec 26 10:06:24 ip-10-2-33-5.ec2.internal kube-proxy[2850]: I1226 10:06:24.978707    2850 flags.go:33] FLAG: --cluster-cidr="10.254.0.0/16"

API groups supported by Kubernetes 1.17.0:
# kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

Integrating Harbor with Kubernetes

Create the registry credentials

Log in to Harbor on the master node:
# docker login -u k8s-btcexa -p 'Blockshine123' harbor.btcexa.com
Login Succeeded
The credentials are saved automatically to ~/.docker/config.json.
# cat /root/.docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSJoYXJib3IuYnRjZXhhLmNvbSI6IHsKCQkJImF1dGgiOiAiYXpoekxXSjBZMlY0WVRwQ2JHOWphM05vYVc1bE1USXoiCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE5LjAzLjUgKGxpbnV4KSIKCX0KfQ==
# vim harborsecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harborsecret
  namespace: default
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJoYXJib3IuYnRjZXhhLmNvbSI6IHsKCQkJImF1dGgiOiAiYXpoekxXSjBZMlY0WVRwQ2JHOWphM05vYVc1bE1USXoiCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE5LjAzLjUgKGxpbnV4KSIKCX0KfQ==
type: kubernetes.io/dockerconfigjson
# kubectl create -f harborsecret.yaml
# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-m5hwn   kubernetes.io/service-account-token   3      48m
harborsecret          kubernetes.io/dockerconfigjson        1      6s

Test pod

# cat my-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: harbor.btcexa.com/nginx/nginx:latest
        imagePullPolicy: Always
        ports:
        - name:
          containerPort: 80
      imagePullSecrets:
      - name: harborsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  namespace: default
spec:
  selector:
    app: my-nginx
  type: ClusterIP
  ports:
  - name:
    port: 80
    targetPort: 80

Create the pod:
# kubectl apply -f my-nginx.yaml
# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.254.0.1   ...
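To confirm that the image was really pulled from Harbor using the secret and that the Service answers, a few standard checks (added here, not part of the original) are:

# All four replicas should reach Running; an ImagePullBackOff points at the secret
kubectl get pods -l app=my-nginx -o wide
kubectl describe pod -l app=my-nginx | grep -A3 Events
# Curl the ClusterIP of my-nginx-service from any node
curl -I http://$(kubectl get svc my-nginx-service -o jsonpath='{.spec.clusterIP}')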