Kubernetes: Quickly Deploy Your Cluster with kubeadm


What is Kubernetes

• Kubernetes, or K8s for short, is a container cluster management system open-sourced by Google in 2014.

• Kubernetes is used to deploy, scale, and manage containerized applications; its goal is to make deploying containerized applications simple and efficient.

Official website: https://kubernetes.io

Master components

kube-apiserver

The unified entry point to the cluster and the coordinator of all components. It exposes its services as a RESTful API; every create/read/update/delete and watch operation on object resources is handled by the APIServer and then persisted to etcd.
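Once the cluster below is up, you can see this RESTful interface directly: every kubectl command ultimately calls these endpoints. A quick sanity check, assuming kubectl is already configured as described later:

kubectl get --raw /healthz                                          # apiserver health endpoint, prints "ok"
kubectl get --raw /api/v1/namespaces/kube-system/pods | head -c 300 # raw REST read of Pod objects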

kube-controller-manager

Handles the routine background tasks in the cluster. Each resource has a corresponding controller, and the ControllerManager is responsible for managing all of these controllers.

kube-scheduler

Selects a Node for newly created Pods according to the scheduling algorithm. It can be deployed anywhere: on the same node as the other components or on a different one.

etcd

A distributed key-value store. It holds the cluster's state data, such as Pod and Service object information.
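As an illustration, after the deployment below you can peek at the keys Kubernetes stores in etcd. This is a sketch that assumes the etcdctl client is installed on the master; it reuses the etcd certificates kubeadm generates under /etc/kubernetes/pki/etcd:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head    # object keys such as /registry/pods/...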

Node组件

kubelet

kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on the local machine: creating containers, mounting data volumes into Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.

kube-proxy

Implements the Pod network proxy on each Node, maintaining network rules and performing layer-4 load balancing.
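In the default iptables mode you can observe the rules kube-proxy maintains for each Service. A quick check to run on any node once the cluster below is up:

iptables-save | grep KUBE-SVC | head    # the KUBE-SVC-* chains implement the layer-4 load balancing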

docker or rkt

The container engine that runs the containers.

Two ways to deploy K8s in production

kubeadm

Kubeadm is a tool that provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster. (The other common approach is binary packages: download the release binaries and deploy every component by hand. This article uses kubeadm.)

Environment preparation: this walkthrough uses CentOS 7.9; if your version is 7.1-7.4, run yum update first to upgrade:

[root@localhost ~]# yum update -y
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

Disable the firewall. CentOS 7 uses firewalld, while version 6 used iptables (iptables can still be used on CentOS 7). Both firewalld and iptables are user-space tools implemented on top of the kernel's netfilter. A fresh CentOS 7 install creates netfilter rules that only let certain ports through from outside, so we disable the firewall to clear those rules. In practice a firewall is usually placed at the traffic entry point in front of the servers; firewall rules are rarely added on the servers themselves.

systemctl stop firewalld
systemctl disable firewalld

Disable the swap partition. This is mandatory for k8s: with swap enabled, k8s may fail to start.

# disable swap
swapoff -a                              # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanent
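You can verify that swap is really off before continuing (the Swap row should show all zeros):

free -m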

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary

Set the hostnames and add the planned IPs and hostnames to /etc/hosts:

hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

cat >> /etc/hosts << EOF
192.168.179.102 k8s-master
192.168.179.103 k8s-node1
192.168.179.104 k8s-node2
EOF

# pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply
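If sysctl --system complains that these bridge keys do not exist, the br_netfilter kernel module is usually not loaded yet. An extra step that is often needed on a fresh CentOS 7 (whether you need it depends on your image):

modprobe br_netfilter                        # load the bridge netfilter module
sysctl net.bridge.bridge-nf-call-iptables    # should now print "... = 1"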

Time synchronization: make sure the time stays consistent across nodes, because k8s depends on it. Pick the correct timezone first, then synchronize.

yum install ntpdate -y
ntpdate time.windows.com

Install Docker/kubeadm/kubelet [all nodes]

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

Install Docker (the repo file below comes from the Aliyun docker-ce mirror, matching the Aliyun sources used elsewhere in this article):

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker && systemctl start docker

Configure an image registry mirror so image pulls are faster. The mirror URL below is a placeholder; substitute your own accelerator address (e.g. one issued by Aliyun):

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
docker info
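The kubeadm init output later warns that Docker's cgroup driver is "cgroupfs" while "systemd" is recommended. Optionally you can set the driver in the same daemon.json; this is a sketch and not required for this walkthrough, since kubeadm detects the Docker cgroup driver automatically (the mirror URL is the same placeholder as above):

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker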

Add the Aliyun YUM repository (it provides the k8s components; run on all nodes):

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

Install the components on all nodes:

yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
systemctl enable kubelet

[root@ks8-node1 ~]# kube<TAB>
kubeadm  kubectl  kubelet

kubelet: the node agent, managed as a systemd daemon
kubeadm: the deployment tool
kubectl: the k8s command-line management tool

After installing, do not start kubelet right away: its configuration file does not exist yet. kubeadm generates it, and it only appears after kubeadm has run; kubeadm also pulls kubelet up automatically. So at this point, only enable it at boot.

All nodes end up with the three tools kubelet, kubeadm and kubectl.

kubectl (the cluster management tool) really only needs to be installed on the master; it is installed on the nodes as well but simply not used there.

Deploy the Kubernetes Master (create the master first, then join the nodes to the cluster)

Run all of the following on the master node, 192.168.179.102:

kubeadm init \
  --apiserver-advertise-address=192.168.179.102 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

The kubeadm init command creates a master node; the flags that follow are explained below.

--apiserver-advertise-address=192.168.179.102   the advertised address that nodes on the internal network use to connect to the apiserver
--image-repository registry.aliyuncs.com/google_containers   use the Aliyun image repository; by default images are pulled from repositories hosted abroad that may be unreachable, and switching to a domestic mirror solves the network problem
--kubernetes-version v1.19.0   the version, which must match the component versions installed with yum above

These two CIDRs just must not conflict with the existing physical network (a quick check follows the list below):

--service-cidr=10.96.0.0/12        the Service network
--pod-network-cidr=10.244.0.0/16   the Pod network
--ignore-preflight-errors=all      ignore preflight check errors
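A simple way to check for conflicts is to look at the routes already present on the hosts; none of them should fall inside the two CIDRs above:

ip route   # existing networks such as 192.168.179.0/24 must not overlap 10.96.0.0/12 or 10.244.0.0/16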

[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.179.102 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.19.0 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16 \
> --ignore-preflight-errors=all
W1115 14:53:46.739904    1251 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0

What kubeadm init does to initialize the master node, phase by phase:

(1) [preflight] Environment checks and image pulls (kubeadm config images pull). This checks whether the machine meets the requirements for installing k8s, e.g. the minimum CPU and memory configuration; the warning below shows this VM has only 1 CPU while 2 are required.

[preflight] Running pre-flight checks
	[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
[preflight] Pulling images required for setting up a Kubernetes cluster

[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.19.0   bc9c328f379c   2 months ago   118MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.19.0   09d665d529d0   2 months ago   111MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.19.0   1b74e93ece2f   2 months ago   119MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.19.0   cbdc8369d8b1   2 months ago   45.7MB
registry.aliyuncs.com/google_containers/etcd                      3.4.9-1   d4ca8726196c   4 months ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0     bfe3a36ebd25   5 months ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2       80d28bedfe5d   9 months ago   683kB

(2) [certs] Generate the k8s and etcd certificates under /etc/kubernetes/pki. These certificates are generated to enable TLS between the components.

[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.179.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.179.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.179.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key

[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# ls
apiserver.crt              apiserver-kubelet-client.crt  etcd                front-proxy-client.key
apiserver-etcd-client.crt  apiserver-kubelet-client.key  front-proxy-ca.crt  sa.key
apiserver-etcd-client.key  ca.crt                        front-proxy-ca.key  sa.pub
apiserver.key              ca.key                        front-proxy-client.crt

(3) [kubeconfig] Generate the kubeconfig files. A kubeconfig is an authentication file: to connect to the apiserver you must specify the apiserver's address and the identity to connect with.

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

Your Kubernetes control-plane has initialized successfully!

The K8s control plane has now been initialized. To use the cluster, perform the following steps.

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

This step copies the cluster connection config to the default path so that the command-line tool can manage the cluster, i.e. so you can use kubectl (without copying this file, kubectl cannot manage the cluster).
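As root you can also point kubectl at the admin kubeconfig with an environment variable instead of copying the file; the effect is equivalent:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes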

(4) [kubelet-start] Generate the kubelet configuration file.

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet

Here kubeadm generates the kubelet configuration file for you and also starts kubelet; check it with systemctl status kubelet.

[root@k8s-master .kube]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2022-04-15 11:05:58 CST; 2min 27s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 2797 (kubelet)
    Tasks: 12
   Memory: 56.8M
   CGroup: /system.slice/kubelet.service
           └─2797 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf ...

Apr 15 11:08:04 k8s-master kubelet[2797]: W0415 11:08:04.182477    2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:05 k8s-master kubelet[2797]: E0415 11:08:05.595183    2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Apr 15 11:08:09 k8s-master kubelet[2797]: W0415 11:08:09.183292    2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:10 k8s-master kubelet[2797]: E0415 11:08:10.616475    2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Apr 15 11:08:14 k8s-master kubelet[2797]: W0415 11:08:14.184185    2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:15 k8s-master kubelet[2797]: E0415 11:08:15.635718    2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Apr 15 11:08:19 k8s-master kubelet[2797]: W0415 11:08:19.184774    2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:20 k8s-master kubelet[2797]: E0415 11:08:20.659029    2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Apr 15 11:08:24 k8s-master kubelet[2797]: W0415 11:08:24.184898    2797 cni.go:239] Unable to update cni config: no networks found i...i/net.d
Apr 15 11:08:25 k8s-master kubelet[2797]: E0415 11:08:25.668610    2797 kubelet.go:2103] Container runtime network not ready: Networ...ialized
Hint: Some lines were ellipsized, use -l to show in full.

The full set of kubelet flags:

--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
--kubeconfig=/etc/kubernetes/kubelet.conf \
--config=/var/lib/kubelet/config.yaml \
--network-plugin=cni \
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2

(5) [control-plane] Deploy the control-plane components as containers started from images.

[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"

This starts the kube-apiserver, kube-controller-manager, kube-scheduler and etcd components for you:

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-ddt97             0/1     Pending   0          43m
coredns-6d56c8448f-lwn8m             0/1     Pending   0          43m
etcd-k8s-master                      1/1     Running   0          43m
kube-apiserver-k8s-master            1/1     Running   0          43m
kube-controller-manager-k8s-master   1/1     Running   0          43m
kube-proxy-xth6p                     1/1     Running   0          43m
kube-scheduler-k8s-master            1/1     Running   0          43m

(6) [etcd] Deploy the etcd database as a container started from an image.

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

(7) [upload-config] [kubelet] [upload-certs] Upload the configuration into k8s (it is stored inside the cluster; nodes joining later pull this configuration to start up).

[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster

(8) [mark-control-plane] Add the label node-role.kubernetes.io/master='' to the control-plane node, plus the taint [node-role.kubernetes.io/master:NoSchedule].

[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[root@k8s-master ~]# kubectl describe node k8s-master
Taints: node.kubernetes.io/not-ready:NoExecute
        node-role.kubernetes.io/master:NoSchedule
        node.kubernetes.io/not-ready:NoSchedule
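Because of this taint, ordinary Pods are never scheduled onto the master. On a throwaway test cluster you can remove the taint if you want the master to run workloads too (optional; the trailing "-" deletes the taint):

kubectl taint nodes k8s-master node-role.kubernetes.io/master-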

(9) [bootstrap-token] Automatically issue certificates for kubelets. This token exists to issue certificates when other nodes join the cluster (i.e. a certificate is issued for every node).

[bootstrap-token] Using token: u7iclt.miuss90cwnjokuje
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
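The bootstrap tokens that currently exist, including their TTLs, can be listed at any time on the master:

kubeadm token list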

(10) [addons] Deploy the add-ons CoreDNS and kube-proxy.

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Finally, copy the authentication file for connecting to the k8s cluster to the default path so that kubectl works (do not forget this!):

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   7m    v1.19.0

[root@k8s-master manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

The initialization work above, summarized:

What kubeadm init does:

[preflight]           environment checks and image pulls (kubeadm config images pull)
[certs]               generate the k8s and etcd certificates under /etc/kubernetes/pki
[kubeconfig]          generate the kubeconfig files
[kubelet-start]       generate the kubelet configuration file
[control-plane]       deploy the control-plane components as containers (kubectl get pods -n kube-system)
[etcd]                deploy the etcd database as a container
[upload-config] [kubelet] [upload-certs]   upload the configuration into k8s
[mark-control-plane]  label the master node-role.kubernetes.io/master='' and taint it [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token]     automatically issue certificates for kubelets
[addons]              deploy the add-ons CoreDNS and kube-proxy

Join the Kubernetes Nodes

The master initialization above is complete, but two steps remain: deploying a Pod network and joining the worker nodes.

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.179.102:6443 --token u7iclt.miuss90cwnjokuje \
    --discovery-token-ca-cert-hash sha256:a3f0566e54fee79bff76bcd87c49c656a339dbdf59f874ac90992418f6a94157

Run this on 192.168.179.103/104 (the Nodes).

To add new nodes to the cluster, run the kubeadm join command printed in the kubeadm init output, i.e. the command generated earlier:

[root@k8s-node1 ~]# kubeadm join 192.168.111.6:6443 --token 61d4wg.ktsy9ru26oseb2aa \
    --discovery-token-ca-cert-hash sha256:536ec429a1e2e1bd62eda768623805a7ae2a84aba650c5b1d09011bbf95b640e
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.14. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node1 ~]# kubeadm join 192.168.179.102:6443 --token u7iclt.miuss90cwnjokuje \
    --discovery-token-ca-cert-hash sha256:a3f0566e54fee79bff76bcd87c49c656a339dbdf59f874ac90992418f6a94157

[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   60m   v1.19.0
k8s-node1    NotReady   <none>   31s   v1.19.0
k8s-node2    NotReady   <none>   91s   v1.19.0

[root@master ~]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
csr-gp8cd   15m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:xxwuv9   Approved,Issued
csr-hdmjf   22m   kubernetes.io/kube-apiserver-client-kubelet   system:node:master        Approved,Issued
csr-tlfnn   16m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:xxwuv9   Approved,Issued

You can see the nodes are not Ready yet; at this point the container network plugin needs to be deployed.

[root@k8s-master ~]# journalctl -u kubelet > a.txt dumps the kubelet logs; they show that the container network is not ready, so a CNI plugin must be installed. There are many kinds of k8s network components developed by different companies, e.g. calico. Calico is the mainstream choice and is the one recommended here.

Nov 15 14:56:50 k8s-master kubelet[1616]: E1115 14:56:50.434600    1616 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 15 14:56:50 k8s-master kubelet[1616]: E1115 14:56:50.485711    1616 kubelet.go:2183] node "k8s-master" not found

The default token is valid for 24 hours. Once it expires, it can no longer be used, and a new token has to be created:

[root@master ~]# kubeadm token create --print-join-command
W0418 17:38:02.439937   18435 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.111.6:6443 --token qqf3xv.3ovyztz2jzkjsklq --discovery-token-ca-cert-hash sha256:861037155eac93e6890bbfccad7471e5cb56e710b240c8f2641212c2c0ecb460
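If you ever have a token but not the --discovery-token-ca-cert-hash value, the hash can be recomputed from the CA certificate on the master (the standard openssl pipeline from the kubeadm documentation):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'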

Deploy the calico network

[root@k8s-master ~]# wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, edit the Pod network (CALICO_IPV4POOL_CIDR) defined in calico.yaml so that it is the same as the one specified to kubeadm init.

Uncomment and change

    - name: CALICO_IPV4POOL_CIDR
      value: "192.168.0.0/16"

to

    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"

This is the --pod-network-cidr=10.244.0.0/16 network from kubeadm init, i.e. the Pod network.
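Before applying the manifest it is worth confirming the edit took effect; both lines should be uncommented and show the Pod CIDR:

grep -A1 CALICO_IPV4POOL_CIDR calico.yaml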

[root@k8s-master ~]# kubectl apply -f calico.yaml

You can see the network component being deployed as containers; the READY column shows whether the network is ready yet:

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-5c6f6b67db-q5qb6   0/1     Pending    0          39s
calico-node-6hgrq                          0/1     Init:0/3   0          39s
calico-node-jxh4t                          0/1     Init:2/3   0          39s
calico-node-xjklb                          0/1     Init:1/3   0          39s

These images download rather slowly; once they are downloaded and everything has started, check the node status again. This shows which images calico uses:

[root@k8s-master ~]# cat calico.yaml | grep image
          image: calico/cni:v3.16.5
          image: calico/cni:v3.16.5
          image: calico/pod2daemon-flexvol:v3.16.5
          image: calico/node:v3.16.5
          image: calico/kube-controllers:v3.16.5

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5c6f6b67db-q5qb6   1/1     Running   0          3m52s
calico-node-6hgrq                          1/1     Running   0          3m52s
calico-node-jxh4t                          1/1     Running   0          3m52s
calico-node-xjklb                          1/1     Running   0          3m52s
coredns-6d56c8448f-ddt97                   1/1     Running   0          82m
coredns-6d56c8448f-lwn8m                   1/1     Running   0          82m
etcd-k8s-master                            1/1     Running   0          82m
kube-apiserver-k8s-master                  1/1     Running   0          82m
kube-controller-manager-k8s-master         1/1     Running   0          82m
kube-proxy-7wgls                           1/1     Running   0          22m
kube-proxy-vkt7g                           1/1     Running   0          23m
kube-proxy-xth6p                           1/1     Running   0          82m
kube-scheduler-k8s-master                  1/1     Running   0          82m

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   3d9h   v1.19.0
k8s-node1    Ready    <none>   3d8h   v1.19.0
k8s-node2    Ready    <none>   3d8h   v1.19.0

Test the Kubernetes cluster

Verify that Pods work
Verify Pod-to-Pod network communication
Verify DNS resolution (a sketch for the last two checks follows the commands below)

Create a pod in the Kubernetes cluster and verify that it runs normally:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
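For the network and DNS checks in the list above, a minimal sketch (busybox:1.28 is chosen because its nslookup works reliably against CoreDNS):

$ kubectl get pod -o wide     # note the nginx Pod IP, then ping it from another node or Pod
$ kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes.default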

Access address: http://NodeIP:Port

Troubleshooting: if the first initialization fails, simply running kubeadm init again will not succeed either, because the first run left the environment in a broken state. You need to clean up the current environment and re-run the initialization from a clean one.

1. Clean up the current initialization environment:

kubeadm reset

2. If the calico pods do not become ready, pull the images manually on every node to see whether the pull succeeds:

grep image calico.yaml   # on every node, pull each listed image and check that the pull completes

docker pull calico/xxx
