Kubernetes Cluster Installation



Environment Preparation

192.168.1.53 k8s-master

192.168.1.52 k8s-node-1

192.168.1.51 k8s-node-2

Set the hostname on each of the three machines:

On the master:
[root@localhost ~]# hostnamectl --static set-hostname k8s-master

On node 1:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1

On node 2:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2
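To confirm the new hostname took effect, a quick check (not part of the original steps) is:

hostnamectl status    # the "Static hostname" field should show k8s-master, k8s-node-1 or k8s-node-2
hostname              # prints the hostname currently in use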

Add the host entries on all three machines by running the following:

echo '192.168.1.53 k8s-master
192.168.1.53 etcd
192.168.1.53 registry
192.168.1.52 k8s-node-1
192.168.1.51 k8s-node-2' >> /etc/hosts
cat /etc/hosts
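A quick way to confirm the entries resolve (a check added here, not in the original post):

getent hosts k8s-master etcd registry    # should print the 192.168.1.x addresses from /etc/hosts
ping -c 1 k8s-node-1                     # basic reachability test between the machines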

Disable the firewall on all three machines:

systemctl disable firewalld.service
systemctl stop firewalld.service
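To verify firewalld is really stopped and will not come back after a reboot (not in the original write-up):

systemctl is-active firewalld     # expected: inactive
systemctl is-enabled firewalld    # expected: disabled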

Install the required tools on k8s-master, k8s-node-1 and k8s-node-2:

yum install -y kubelet kubeadm kubectl kubernetes-cni

If the default repositories are not reachable, add Docker and Kubernetes yum repositories first. The original post created them with heredocs (cat >> /etc/yum.repos.d/docker.repo <<EOF ... and cat >> /etc/yum.repos.d/kubernetes.repo <<EOF ...), but the heredoc bodies were lost from the text; a typical mirror configuration is sketched below.
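A minimal sketch of the two repo files, assuming the Aliyun mirrors are used; the baseurl values are an assumption, not something preserved from the original:

# Docker CE repo (assumed mirror)
cat >> /etc/yum.repos.d/docker.repo <<EOF
[docker-ce-stable]
name=Docker CE Stable
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/
enabled=1
gpgcheck=0
EOF

# Kubernetes repo (assumed mirror)
cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum makecache    # refresh the repo metadata, then re-run the yum install above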

The kubelet cgroup driver only needs to match the one Docker uses. Check the Docker unit file (vim /usr/lib/systemd/system/docker.service): if it already starts dockerd with --exec-opt native.cgroupdriver=systemd, nothing needs to be changed.
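One way to see which cgroup driver Docker is actually running with (a verification step, not from the original):

docker info 2>/dev/null | grep -i "cgroup driver"
# on a matching setup this prints: Cgroup Driver: systemd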

Run on k8s-master:

kubeadm init --apiserver-advertise-address=192.168.1.53 --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=swap

If the init reports preflight errors, fix them and run it again:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
systemctl enable docker.service     # clears the "docker service is not enabled" warning
systemctl enable kubelet.service    # clears the "kubelet service is not enabled" warning
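The echo above only lasts until the next reboot. A sketch of making the bridge sysctls persistent (the file name /etc/sysctl.d/k8s.conf is arbitrary, not from the original):

modprobe br_netfilter             # the bridge sysctls require this kernel module
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system                   # reload all sysctl configuration files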

Run on k8s-master (compare this with the same file on the nodes; some of the settings here may be redundant):

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
#Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

Reload and start kubelet:

systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
journalctl -xefu kubelet    # view the logs

Add a registry mirror so image pulls work:

cat /etc/docker/daemon.json
{ "registry-mirrors": ["http://68e02ab9.m.daocloud.io"] }
systemctl restart docker
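Two quick checks that the changes were picked up (verification added here, not in the original post):

systemctl cat kubelet                                     # shows the kubelet unit plus the 10-kubeadm.conf drop-in
docker info 2>/dev/null | grep -A 1 "Registry Mirrors"    # should list http://68e02ab9.m.daocloud.io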

Pull the images first. The first block below uses the versions from the original post, which do not match v1.11.3; the second block further down has the correct versions.

docker pull warrior/pause-amd64:3.0
docker tag warrior/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker pull warrior/etcd-amd64:3.0.17
docker tag warrior/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
docker pull warrior/kube-apiserver-amd64:v1.6.0
docker tag warrior/kube-apiserver-amd64:v1.6.0 gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
docker pull warrior/kube-scheduler-amd64:v1.6.0
docker tag warrior/kube-scheduler-amd64:v1.6.0 gcr.io/google_containers/kube-scheduler-amd64:v1.6.0
docker pull warrior/kube-controller-manager-amd64:v1.6.0
docker tag warrior/kube-controller-manager-amd64:v1.6.0 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0
docker pull warrior/kube-proxy-amd64:v1.6.0
docker tag warrior/kube-proxy-amd64:v1.6.0 gcr.io/google_containers/kube-proxy-amd64:v1.6.0
docker pull gysan/dnsmasq-metrics-amd64:1.0
docker tag gysan/dnsmasq-metrics-amd64:1.0 gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
docker pull warrior/k8s-dns-kube-dns-amd64:1.14.1
docker tag warrior/k8s-dns-kube-dns-amd64:1.14.1 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
docker pull warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker tag warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker pull warrior/k8s-dns-sidecar-amd64:1.14.1
docker tag warrior/k8s-dns-sidecar-amd64:1.14.1 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
docker pull awa305/kube-discovery-amd64:1.0
docker tag awa305/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0
docker pull gysan/exechealthz-amd64:1.2
docker tag gysan/exechealthz-amd64:1.2 gcr.io/google_containers/exechealthz-amd64:1.2

Re-run the init, ignoring preflight errors:

kubeadm init --apiserver-advertise-address=192.168.1.53 --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

List the images this kubeadm version actually needs:

kubeadm config images list --kubernetes-version=v1.11.3

Pull those images on all three machines:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 k8s.gcr.io/kube-proxy-amd64:v1.11.3
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker pull coredns/coredns:1.1.3
docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker pull mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.11
docker tag mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.11 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.11
docker pull mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.11
docker tag mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.11 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.11
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

When output like the following appears, the control plane installed successfully:

kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228

After the successful init, run the commands it prints:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
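The pull/tag pairs are repetitive; the loop below is an equivalent convenience sketch written for this article, assuming the same mirrorgooglecontainers source and the same tags:

#!/bin/bash
# Pull each image from the mirror and retag it under the name kubeadm expects.
images=(
  kube-apiserver-amd64:v1.11.3
  kube-controller-manager-amd64:v1.11.3
  kube-scheduler-amd64:v1.11.3
  kube-proxy-amd64:v1.11.3
  etcd-amd64:3.2.18
  pause:3.1
  pause-amd64:3.1
  kubernetes-dashboard-amd64:v1.10.0
  k8s-dns-sidecar-amd64:1.14.11
  k8s-dns-kube-dns-amd64:1.14.11
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
done
# coredns is published under its own namespace
docker pull coredns/coredns:1.1.3
docker tag  coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3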

Run on k8s-node-1 and k8s-node-2:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228 --ignore-preflight-errors=swap

Because the test hosts also run other services, turning swap off could affect them, so instead change the kubelet startup arguments and add --fail-swap-on=false to drop that restriction:

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_DNS_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

systemctl daemon-reload
systemctl start kubelet

It does not matter if this start fails; running the join command again starts kubelet and generates its configuration files:

kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228 --ignore-preflight-errors=swap

Then confirm kubelet started successfully:

systemctl status kubelet
journalctl -xefu kubelet    # view the logs
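The token embedded in the join command expires (after 24 hours by default). If the join is attempted later and fails on the token, a fresh join command can be printed on the master; this is a standard kubeadm subcommand, not a step from the original post:

kubeadm token create --print-join-command    # run on k8s-master; prints a new kubeadm join line with a valid token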

Run on the master:

kubectl get nodes shows the nodes as NotReady because the pod network is not up yet. Install flannel:

mkdir /docker
cd /docker/
yum -y install wget    # install wget first if it is missing
kubectl --namespace kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
cat kube-flannel.yml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.8.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.8.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Apply it and check the cluster:

kubectl --namespace kube-system apply -f ./kube-flannel.yml
kubectl get cs
kubectl get nodes    # after a short while the nodes switch to Ready
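To watch the flannel pods come up and confirm the nodes flip to Ready (a verification step not spelled out in the original):

kubectl get pods --namespace kube-system -l app=flannel -o wide    # expect one kube-flannel-ds pod per node
kubectl get nodes -w                                               # watch until every node reports Ready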

Reference: https://blog.csdn.net/zhuchuangang/article/details/76572157/
