Offline Installation of Kubernetes from Binary Files on CentOS 7

I. Download the Kubernetes (K8S) Binaries

1) Pick the version you need from the official release page; this article uses v1.9.1 as an example. Download the binaries from the CHANGELOG page to the /root directory.

2) Component selection: under Server Binaries, choose kubernetes-server-linux-amd64.tar.gz. This archive already contains every component K8S needs, so there is no need to download the Client or other packages separately.

II. Installation Approach

Unpack kubernetes-server-linux-amd64.tar.gz, copy the executable binaries under server/bin/ to /usr/bin/, then create the corresponding systemd unit files and configuration files.

III. Node Planning

Node IP          Role      Components Installed
192.168.1.10     Master    etcd, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.1.128    Node1     kubelet, kube-proxy, flannel

Here etcd is the Kubernetes datastore: it persists all cluster state, i.e. every object created, updated, or deleted through the API.

Add the host entries to /etc/hosts on both machines in advance:

sed -i '$a 192.168.1.10 master' /etc/hosts

sed -i '$a 192.168.1.128 node1' /etc/hosts
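A quick optional check that the bindings resolve as intended (hostnames as defined above):

getent hosts master node1    # should print 192.168.1.10 and 192.168.1.128

ping -c 1 node1    # basic reachability test from the master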

IV. Deploy the Master Node

The pattern for every component is the same:
1) Copy the corresponding binary to /usr/bin
2) Create the systemd service unit file
3) Create the parameter/configuration file referenced by the unit
4) Enable the service so it starts on boot

0. Install the Docker service offline

Unpack docker.tar.gz, then force-install the RPMs with rpm, ignoring dependencies:

tar zxf docker.tar.gz

cd docker

rpm -ivh *.rpm --nodeps --force

Start Docker:

systemctl daemon-reload

systemctl start docker
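Before moving on, it may be worth confirming the daemon is actually up; a minimal check:

systemctl is-active docker    # should print "active"

docker info    # prints daemon details; an error here means Docker is not running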

1. Install the etcd database

(1) Download: K8S needs etcd as its datastore. This article uses v3.2.11 (etcd-v3.2.11-linux-amd64.tar.gz). Unpack the archive and copy the etcd and etcdctl binaries to /usr/bin:

tar zxf etcd-v3.2.11-linux-amd64.tar.gz

cd etcd-v3.2.11-linux-amd64

cp etcd etcdctl /usr/bin/

(2) Create the etcd.service unit file. In /usr/lib/systemd/system/, create etcd.service (vim /usr/lib/systemd/system/etcd.service) with the following content:

[Unit]

Description=etcd.service

[Service]

Type=notify

TimeoutStartSec=0

Restart=always

WorkingDirectory=/var/lib/etcd

EnvironmentFile=-/etc/etcd/etcd.conf

ExecStart=/usr/bin/etcd

[Install]

WantedBy=multi-user.target

(3) Create the directories referenced above (the working directory and the configuration directory):

mkdir -p /var/lib/etcd && mkdir -p /etc/etcd/

(4) Create the etcd.conf file:

vim /etc/etcd/etcd.conf

and write the following content:

ETCD_NAME="ETCD Server"

ETCD_DATA_DIR="/var/lib/etcd/"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

(5) Start etcd

systemctl daemon-reload

systemctl start etcd.service

(6) Check that etcd started successfully

[root@server1 ~]# etcdctl cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379

cluster is healthy

Output like this means etcd is up.

(7) By default etcd listens on TCP port 2379 for clients (and 2380 for peer communication)

[root@server1 ~]# netstat -lntp | grep etcd

tcp         0      0 127.0.0.1:2380     0.0.0.0:*             LISTEN      11376/etcd

tcp6       0      0 :::2379                 :::*                    LISTEN      11376/etcd
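As an extra optional sanity check, you can write and read back a throwaway key with the v2 etcdctl API (the key name below is arbitrary):

etcdctl set /sanity-check ok

etcdctl get /sanity-check    # should print "ok"

etcdctl rm /sanity-check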

2. Install the kube-apiserver service

Note: the server or VM network interface must have a default gateway configured, otherwise the service will fail to start!

(1) Unpack the previously downloaded kubernetes-server-linux-amd64.tar.gz and copy kube-apiserver, kube-controller-manager, and kube-scheduler from its server/bin subdirectory to /usr/bin/:

tar zxf kubernetes-server-linux-amd64.tar.gz

cd kubernetes/server/bin/

cp kube-apiserver kube-controller-manager kube-scheduler /usr/bin/

(2) Add the /usr/lib/systemd/system/kube-apiserver.service file

vim /usr/lib/systemd/system/kube-apiserver.service, with the following content:

[Unit]

Description=Kubernetes API Server

After=etcd.service

Wants=etcd.service

[Service]

EnvironmentFile=/etc/kubernetes/apiserver

ExecStart=/usr/bin/kube-apiserver  \

$KUBE_ETCD_SERVERS \

$KUBE_API_ADDRESS \

$KUBE_API_PORT \

$KUBE_SERVICE_ADDRESSES \

$KUBE_ADMISSION_CONTROL \

$KUBE_API_LOG \

$KUBE_API_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

(3) Create the directory kube-apiserver needs

mkdir -p /etc/kubernetes/

(4) Create the kube-apiserver configuration file /etc/kubernetes/apiserver

vim /etc/kubernetes/apiserver, with the following content:

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--port=8080"

KUBELET_PORT="--kubelet-port=10250"

KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

(5) Start kube-apiserver

systemctl daemon-reload

systemctl start kube-apiserver.service

(6) Check that it started successfully

[root@server1 bin]# netstat -lntp | grep kube

tcp6       0      0 :::6443                 :::*           LISTEN      11471/kube-apiserve

tcp6       0      0 :::8080                 :::*           LISTEN      11471/kube-apiserve
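Because the insecure port 8080 is open, a quick optional way to confirm the API server is answering requests:

curl http://127.0.0.1:8080/version    # should return a JSON version block for v1.9.1

curl http://127.0.0.1:8080/healthz    # should return "ok"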

3. Deploy kube-controller-manager

(1) Add the /usr/lib/systemd/system/kube-controller-manager.service file

vim /usr/lib/systemd/system/kube-controller-manager.service, with the following content:

[Unit]

Description=Kubernetes Controller Manager

After=kube-apiserver.service

Requires=kube-apiserver.service

[Service]

EnvironmentFile=-/etc/kubernetes/controller-manager

ExecStart=/usr/bin/kube-controller-manager \

$KUBE_MASTER \

$KUBE_CONTROLLER_MANAGER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

(2) Add the controller-manager configuration file

vim /etc/kubernetes/controller-manager, with the following content:

KUBE_MASTER="--master=KUBE_CONTROLLER_MANAGER_ARGS=" "

(3) Start kube-controller-manager

systemctl daemon-reload

systemctl start kube-controller-manager.service

(4) Verify that kube-controller-manager started successfully

[root@server1 bin]# netstat -lntp | grep kube-controll

tcp6       0      0 :::10252     :::*    LISTEN      11546/kube-controll

4. Deploy the kube-scheduler service

(1) Edit /usr/lib/systemd/system/kube-scheduler.service

vim /usr/lib/systemd/system/kube-scheduler.service, with the following content:

[Unit]

Description=Kubernetes Scheduler

After=kube-apiserver.service

Requires=kube-apiserver.service

[Service]

User=root

EnvironmentFile=-/etc/kubernetes/scheduler

ExecStart=/usr/bin/kube-scheduler \

$KUBE_MASTER \

$KUBE_SCHEDULER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

(2) Edit the kube-scheduler configuration file

vim /etc/kubernetes/scheduler, with the following content:

KUBE_MASTER="--master=--log-dir=/home/k8s-t/log/kubernetes --v=2"

(3) Start kube-scheduler

systemctl daemon-reload

systemctl start kube-scheduler.service

(4) Verify that it is running

[root@server1 bin]# netstat -lntp | grep kube-schedule

tcp6       0      0 :::10251        :::*         LISTEN      11605/kube-schedule
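Both components also answer plain HTTP on the status ports shown above, so an optional extra check is:

curl http://127.0.0.1:10252/healthz    # kube-controller-manager, should return "ok"

curl http://127.0.0.1:10251/healthz    # kube-scheduler, should return "ok"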

5. Add kubernetes/server/bin to the default search path

sed -i '$a export PATH=$PATH:/root/kubernetes/server/bin/' /etc/profile

source /etc/profile

6. Check the status of the cluster components:

[root@server1 bin]# kubectl get cs

NAME                 STATUS    MESSAGE              ERROR

scheduler            Healthy   ok

controller-manager   Healthy   ok

etcd-0               Healthy   {"health": "true"}

At this point, the K8S master node installation is complete.

One-liner to restart all master services:

for i in etcd kube-apiserver kube-controller-manager kube-scheduler docker;do systemctl restart $i;done
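Step 4 of the deployment plan calls for starting these services on boot; a similar one-liner (a sketch, assuming the unit files above are in place) handles that:

for i in etcd kube-apiserver kube-controller-manager kube-scheduler docker;do systemctl enable $i;done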

====================================

Node Installation:

Installing a node requires copying kube-proxy and kubelet from kubernetes/server/bin to /usr/bin/, plus the flannel binary package.

1. Install the Docker service offline

Unpack docker.tar.gz, then force-install the RPMs with rpm, ignoring dependencies:

tar zxf docker.tar.gz

cd docker

rpm -ivh *.rpm --nodeps --force

2. Modify the Docker unit file:

vi /usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network-online.target firewalld.service

Wants=network-online.target

[Service]

Type=notify

# the default is not to use systemd for cgroups because the delegate issues still

# exists and systemd currently does not support the cgroup feature set required

# for containers run by docker

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead

# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.

# Only systemd 226 and above support this version.

#TasksMax=infinity

TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

# restart the docker process if it exits prematurely

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s

[Install]

WantedBy=multi-user.target

Start Docker:

systemctl daemon-reload

systemctl start docker

3. Unpack the K8S binary package

tar zxf kubernetes-server-linux-amd64.tar.gz

cd /root/kubernetes/server/bin/

cp kube-proxy kubelet /usr/bin/

4. Install the kube-proxy service

(1) Add the /usr/lib/systemd/system/kube-proxy.service file, with the following content:

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/kubernetes/kubernetes

After=network.target

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/proxy

ExecStart=/usr/bin/kube-proxy \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_MASTER \

$KUBE_PROXY_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

(2) Create the /etc/kubernetes directory

mkdir -p /etc/kubernetes

(3) Add the /etc/kubernetes/proxy configuration file

vim /etc/kubernetes/proxy, with the following content:

KUBE_PROXY_ARGS=""

(4) Add the /etc/kubernetes/config file, with the following content:

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow_privileged=false"

KUBE_MASTER="--master=daemon-reload

systemctl start kube-proxy.service

(6) Check the kube-proxy status

[root@server2 bin]# netstat -lntp | grep kube-proxy

tcp         0      0 127.0.0.1:10249    0.0.0.0:*        LISTEN      11754/kube-proxy

tcp6       0      0 :::10256                :::*               LISTEN      11754/kube-proxy
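Port 10256 is kube-proxy's health-check port, so an optional extra check is:

curl http://127.0.0.1:10256/healthz    # an HTTP 200 response means kube-proxy considers itself healthy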

5. Install the kubelet service

(1) Create the /usr/lib/systemd/system/kubelet.service file

vim /usr/lib/systemd/system/kubelet.service, with the following content:

[Unit]

Description=Kubernetes Kubelet Server

Documentation=https://github.com/kubernetes/kubernetes

After=docker.service

Requires=docker.service

[Service]

WorkingDirectory=/var/lib/kubelet

EnvironmentFile=-/etc/kubernetes/kubelet

ExecStart=/usr/bin/kubelet \

$KUBELET_HOSTNAME \

$KUBELET_POD_INFRA_CONTAINER \

$KUBELET_ARGS

Restart=on-failure

KillMode=process

[Install]

WantedBy=multi-user.target

(2) Create the directory kubelet needs

mkdir -p /var/lib/kubelet

(3) Create the kubelet configuration file

vim /etc/kubernetes/kubelet, with the following content:

KUBELET_HOSTNAME="--hostname-override=192.168.1.128"

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest"

KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"

(4) Add the /var/lib/kubelet/kubeconfig file

One more configuration file is needed: starting with 1.9, the kubelet no longer uses KUBELET_API_SERVER to talk to the API server; it reads a kubeconfig YAML file instead (referenced by --kubeconfig above).

vim /var/lib/kubelet/kubeconfig, with the following content:

apiVersion: v1

kind: Config

users:

- name: kubelet

clusters:

- name: kubernetes

  cluster:

    server: http://192.168.1.10:8080

contexts:

- context:

    cluster: kubernetes

    user: kubelet

  name: service-account-context

current-context: service-account-context

(5) Start kubelet

Disable the swap partition first: swapoff -a (otherwise kubelet fails to start)

systemctl daemon-reload

systemctl start kubelet.service

(6) Check the kubelet status

[root@server2 ~]# netstat -lntp | grep kubelet

tcp        0      0 127.0.0.1:10248     0.0.0.0:*            LISTEN      15410/kubelet

tcp6       0      0 :::10250                :::*                   LISTEN      15410/kubelet

tcp6       0      0 :::10255                :::*                   LISTEN      15410/kubelet

tcp6       0      0 :::4194                 :::*                    LISTEN      15410/kubelet
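Once kubelet is up it registers the node with the API server. On the master (where kubectl talks to localhost:8080 by default), an optional check:

kubectl get nodes    # 192.168.1.128 should appear and turn Ready after a short while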

6. Set up the flannel network

Flannel gives every Docker container in the cluster a unique internal IP, and lets the docker0 networks on different nodes reach each other.

(1) Flannel only needs to be installed on the node machines, not on the etcd or master nodes. This article uses v0.10.0. After downloading, unpack it: tar zxf flannel-v0.10.0-linux-amd64.tar.gz

(2) Copy the flanneld and mk-docker-opts.sh binaries to /usr/bin/; that completes the flannel installation:

cp flanneld mk-docker-opts.sh /usr/bin/

(3) Write the flannel systemd unit so it can be managed as a service

vi /usr/lib/systemd/system/flanneld.service, with the following content:

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

After=network-online.target

Wants=network-online.target

After=etcd.service

Before=docker.service

[Service]

Type=notify

EnvironmentFile=-/etc/sysconfig/flanneld

EnvironmentFile=-/etc/sysconfig/docker-network

ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS

ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

Restart=on-failure

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service

(4) Write the flannel configuration file at the path referenced above, /etc/sysconfig/flanneld

vim /etc/sysconfig/flanneld, with the following content:

# flanneld configuration options

# etcd url location. Point this to the server where etcd runs

FLANNEL_ETCD="etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_KEY="/atomic.io/network/"

(5) Create the wrapper script /usr/bin/flanneld-start (vi /usr/bin/flanneld-start), with the following content:

#!/bin/sh

exec /usr/bin/flanneld \

-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \

-etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \

"$@"

Make the script executable:

chmod +x /usr/bin/flanneld-start

(6) Define a flannel network in etcd (run on the etcd/master node):

etcdctl mk /atomic.io/network/config '{"Network":"172.18.0.0/24"}'
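To confirm the key was written (optional check):

etcdctl get /atomic.io/network/config    # should print {"Network":"172.18.0.0/24"}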

(7) Stop Docker and take down docker0

Because flannel will reconfigure the docker0 network, it is best to stop Docker (and take down the docker0 interface) before starting flannel:

systemctl stop docker

After Docker stops, kubelet stops as well and the master will show the node as NotReady. This is normal; just start kubelet and Docker again once the flannel network is configured.

(8) Start the flannel service

systemctl daemon-reload

systemctl start flanneld

(9) Set the IP address of the docker0 bridge

mkdir -p /usr/lib/systemd/system/docker.service.d

cd /usr/lib/systemd/system/docker.service.d

mk-docker-opts.sh -i

source /run/flannel/subnet.env

vi /usr/lib/systemd/system/docker.service.d/flannel.conf

[Service]

EnvironmentFile=-/run/flannel/docker

(10) Restart the Docker and kubelet services

systemctl restart docker

systemctl restart kubelet

(11) Confirm that docker0 and flannel are on the same subnet

ifconfig

This completes the flannel overlay network setup.

The docker0 networks on the different nodes can now reach each other.
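A quick way to see the address assignment (an optional check; the flannel0 interface name assumes flannel's default udp backend):

cat /run/flannel/subnet.env    # FLANNEL_SUBNET is the per-node slice handed to this host

ip -4 addr show docker0    # the docker0 address should fall inside FLANNEL_SUBNET

ip -4 addr show flannel0    # the overlay interface, inside the 172.18.0.0/24 network defined earlier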

Etcd database operations

Deleting a key:

For example, if the etcdctl mk /atomic.io/network/config '{"Network":"172.18.0.0/24"}' command was written incorrectly, you can delete the key and set it again:

etcdctl rm /atomic.io/network/config

Then simply set the value again. Afterwards, delete the /run/flannel/subnet.env file on each node and restart flanneld to pick up the new IP range.
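Put together, a sketch of the full reset sequence (172.20.0.0/16 below is only an illustrative replacement network, not a value from this setup):

# On the etcd/master node:

etcdctl rm /atomic.io/network/config

etcdctl mk /atomic.io/network/config '{"Network":"172.20.0.0/16"}'

# On each node:

rm -f /run/flannel/subnet.env

systemctl restart flanneld docker kubelet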
