2022-09-09
Binary installation of a GlusterFS cluster on CentOS 7 to provide persistent storage for Kubernetes and serve as KubeSphere's default storage
This procedure is based on CentOS 7; other CentOS versions may require adjustments.
1. Node planning

| Host IP | Hostname | Role | Disks |
| --- | --- | --- | --- |
| 192.168.87.100 | k8s-master | k8s control-plane node | sda - system disk |
| 192.168.87.101 | k8s-node1 | k8s worker node, GlusterFS node | sda - system disk, sdb - GlusterFS data disk |
| 192.168.87.102 | k8s-node2 | k8s worker node, GlusterFS node | sda - system disk, sdb - GlusterFS data disk |
| 192.168.87.103 | k8s-node3 | k8s worker node, GlusterFS node | sda - system disk, sdb - GlusterFS data disk |
2. Configuration
The remaining initialization is the same as for a kubeadm-based Kubernetes install: set a hostname on each node, configure passwordless SSH between all nodes, disable swap, disable SELinux, disable the firewall and iptables, tune kernel parameters (load the bridge module and enable net.bridge.* filtering and net.ipv4.ip_forward), configure the Aliyun base repo plus the Aliyun Docker and Kubernetes repos, set up time synchronization, and enable IPVS.
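The kernel-parameter step above amounts to a small sysctl drop-in; a minimal sketch (the file path /etc/sysctl.d/k8s.conf is a common convention from kubeadm guides, not mandated by this article; apply it with `modprobe br_netfilter && sysctl --system`):

```conf
# /etc/sysctl.d/k8s.conf -- bridged pod traffic must pass through iptables,
# and IP forwarding must be on for pod routing (requires br_netfilter loaded)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
```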
3. Add the GlusterFS yum repo (the Aliyun CentOS yum mirror is assumed to be configured already):
yum install centos-release-gluster
4. GlusterFS node disk layout
Run on every GlusterFS node; the first node is shown as an example.
[root@k8s-node1 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  100G  0 disk
├─sda1   8:1    0  300M  0 part /boot
├─sda2   8:2    0  5.9G  0 part
└─sda3   8:3    0 93.9G  0 part /
sdb      8:16   0  100G  0 disk
sr0     11:0    1 1024M  0 rom
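The empty data disk can also be located programmatically by parsing lsblk's JSON output (`lsblk -J`). A sketch in Python, with a sample embedded so it runs standalone (the device names mirror the listing above; `unused_disks` is an illustrative helper):

```python
import json

# Sample shaped like `lsblk -J` output on the node above
# (embedded here so the sketch is self-contained).
LSBLK_JSON = """
{"blockdevices": [
  {"name": "sda", "type": "disk", "mountpoint": null,
   "children": [{"name": "sda1", "type": "part", "mountpoint": "/boot"}]},
  {"name": "sdb", "type": "disk", "mountpoint": null},
  {"name": "sr0", "type": "rom", "mountpoint": null}
]}
"""

def unused_disks(lsblk_json):
    """Return disks with no partitions and no mountpoint (candidate bricks)."""
    devices = json.loads(lsblk_json)["blockdevices"]
    return [d["name"] for d in devices
            if d.get("type") == "disk"
            and not d.get("children")
            and d.get("mountpoint") is None]

print(unused_disks(LSBLK_JSON))  # ['sdb']
```

On a real node you would feed it `subprocess.run(["lsblk", "-J"], ...)` output instead of the embedded sample.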
II. Install the GlusterFS cluster service
1. Format the newly attached sdb disk
Run on every GlusterFS node; the first node is shown as an example.
[root@k8s-node1 ~]# mkfs.xfs -i size=512 /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
2. Install glusterfs-server
Run on every GlusterFS node.
yum install glusterfs-server
3. Load the kernel modules
modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
lsmod | grep dm_snapshot
lsmod | grep dm_mirror
lsmod | grep dm_thin_pool
4. Install device-mapper
yum install -y device-mapper*
5. Start the glusterd service and enable it at boot
systemctl enable glusterd
systemctl start glusterd
systemctl status glusterd
6. Form the GlusterFS cluster from the first node, k8s-node1
gluster peer probe k8s-node2
gluster peer probe k8s-node3
7. Check the cluster status
[root@k8s-node1 ~]# gluster peer status
Number of Peers: 2

Hostname: 192.168.11.102
Uuid: b2b3c79b-463f-409f-9e2e-18b0ff1ae977
State: Peer in Cluster (Connected)
Other names:
k8s-node2

Hostname: 192.168.11.103
Uuid: ccb63074-f19a-4cf7-84e3-36b96725b3dc
State: Peer in Cluster (Connected)
Other names:
k8s-node3
III. Install the Heketi service
Because GlusterFS itself does not expose an API, you can install Heketi to manage the lifecycle of GlusterFS volumes through a RESTful API that Kubernetes can call. This lets the Kubernetes cluster provision GlusterFS volumes dynamically.
1. Download Heketi on the k8s-node1 node
Version 10.4.0 is used here.
2. Extract Heketi
[root@k8s-node1 heketi]# tar -zxvf heketi-v10.4.0-release-10.linux.amd64.tar.gz
heketi/
heketi/heketi
heketi/heketi-cli
heketi/heketi.json
[root@k8s-node1 heketi]# cd heketi/
[root@k8s-node1 heketi]# pwd
/root/heketi/heketi
[root@k8s-node1 heketi]# ls
heketi  heketi-cli  heketi.json
[root@k8s-node1 heketi]# cp heketi /usr/bin
[root@k8s-node1 heketi]# cp heketi-cli /usr/bin
3. Create the Heketi systemd service file
vi /usr/lib/systemd/system/heketi.service
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target
4. Create the Heketi directories
mkdir -p /var/lib/heketi
mkdir -p /etc/heketi
5. Create the heketi.json file
vi /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "123456"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "123456"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/root/.ssh/id_rsa",
      "user": "root"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "brick_max_size_gb": 1024,
    "brick_min_size_gb": 1,
    "max_bricks_per_volume": 33,

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "debug"
  }
}
When installing GlusterFS as the storage type for a KubeSphere cluster, you must provide the admin account and its secret value.
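When JWT authorization is enabled, every API call must carry a token signed with one of the keys above; heketi-cli builds it from --user/--secret. A sketch of the construction in Python, assuming Heketi's documented HS256 scheme with a qsh claim (the SHA-256 hex of "METHOD&path"); verify the claim set against your Heketi version:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def heketi_token(user: str, key: str, method: str, path: str) -> str:
    """Build the JWT Heketi expects; qsh binds the token to one request."""
    now = int(time.time())
    header = {"alg": "HS256", "typ": "JWT"}
    claims = {
        "iss": user,
        "iat": now,
        "exp": now + 600,
        # qsh = SHA-256 of "METHOD&path" per Heketi's API docs
        "qsh": hashlib.sha256(f"{method}&{path}".encode()).hexdigest(),
    }
    signing_input = b64url(json.dumps(header).encode()) + "." + \
                    b64url(json.dumps(claims).encode())
    sig = hmac.new(key.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = heketi_token("admin", "123456", "GET", "/clusters")
print(token.count("."))  # 2 -- header.claims.signature
```

The token would then be sent as an `Authorization: Bearer <token>` header to the endpoint, e.g. GET /clusters on port 8080.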
6. Start Heketi
[root@k8s-node1 heketi]# systemctl start heketi
[root@k8s-node1 heketi]# systemctl status heketi
● heketi.service - Heketi Server
   Loaded: loaded (/usr/lib/systemd/system/heketi.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-04-18 14:45:45 CST; 4s ago
 Main PID: 109493 (heketi)
    Tasks: 8
   Memory: 8.9M
   CGroup: /system.slice/heketi.service
           └─109493 /usr/bin/heketi --config=/etc/heketi/heketi.json

Apr 18 14:45:45 k8s-node1 heketi[109493]: 2022/04/18 14:45:45 no SSH_KNOWN_HOSTS specified, skipping ssh host verification
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Loaded ssh executor
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Adv: Max bricks per volume set to 33
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Adv: Max brick size 1024 GB
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Adv: Min brick size 1 GB
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Volumes per cluster limit is set to default value of 1000
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 GlusterFS Application Loaded
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Started Node Health Cache Monitor
Apr 18 14:45:45 k8s-node1 heketi[109493]: [heketi] INFO 2022/04/18 14:45:45 Started background pending operations cleaner
Apr 18 14:45:45 k8s-node1 heketi[109493]: Listening on port 8080
7. Enable Heketi at boot
systemctl enable heketi
8. Create a topology file for Heketi describing the clusters, nodes, and disks to add
vi /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["k8s-node1"],
              "storage": ["192.168.11.101"]
            },
            "zone": 1
          },
          "devices": [
            { "name": "/dev/sdb", "destroydata": true }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["k8s-node2"],
              "storage": ["192.168.11.102"]
            },
            "zone": 1
          },
          "devices": [
            { "name": "/dev/sdb", "destroydata": true }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["k8s-node3"],
              "storage": ["192.168.11.103"]
            },
            "zone": 1
          },
          "devices": [
            { "name": "/dev/sdb", "destroydata": true }
          ]
        }
      ]
    }
  ]
}
Replace the IP addresses above with your own, and list your own disk names under devices.
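Rather than hand-editing the JSON for each environment, the topology can be generated from a node list. A sketch in Python (the hostnames and IPs come from the planning table at the top of this article; `make_topology` is an illustrative helper, not part of Heketi):

```python
import json

def make_topology(nodes, device="/dev/sdb", zone=1):
    """Build a Heketi topology dict from (manage_hostname, storage_ip) pairs."""
    return {"clusters": [{"nodes": [
        {"node": {"hostnames": {"manage": [name], "storage": [ip]},
                  "zone": zone},
         # destroydata wipes any existing data on the device when added
         "devices": [{"name": device, "destroydata": True}]}
        for name, ip in nodes
    ]}]}

nodes = [("k8s-node1", "192.168.87.101"),
         ("k8s-node2", "192.168.87.102"),
         ("k8s-node3", "192.168.87.103")]

# Write the result to /etc/heketi/topology.json in a real deployment.
print(json.dumps(make_topology(nodes), indent=2))
```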
9. Load the Heketi topology JSON file
Configure the environment variable (192.168.87.101 is the IP of k8s-node1):
[root@k8s-node1 heketi]# export HEKETI_CLI_SERVER=http://192.168.87.101:8080
[root@k8s-node1 heketi]# echo $HEKETI_CLI_SERVER
http://192.168.87.101:8080
[root@k8s-node1 heketi]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret '123456' topology load --json=/etc/heketi/topology.json
Creating cluster ... ID: b689b88cf1d243d580e0b91af21aa543
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node k8s-node1 ... ID: e1c64f9e1d0b48d0b35cbed842271f18
        Adding device /dev/sdb ... OK
    Creating node k8s-node2 ... ID: d85251fb83641f3e5d755b8e7794663a
        Adding device /dev/sdb ... OK
    Creating node k8s-node3 ... ID: a8aeafbd6136faf1f9b7f54429ac9560
        Adding device /dev/sdb ... OK
10. View the cluster info with heketi-cli
[root@k8s-node1 heketi]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret '123456' topology info

Cluster Id: b689b88cf1d243d580e0b91af21aa543

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: a8aeafbd6136faf1f9b7f54429ac9560
        State: online
        Cluster Id: b689b88cf1d243d580e0b91af21aa543
        Zone: 1
        Management Hostnames: k8s-node3
        Storage Hostnames: 192.168.11.103
        Devices:
                Id:411e4b0927f72b85d7c9472a644d4494   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Known Paths: /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0 /dev/sdb
                        Bricks:

        Node Id: d85251fb83641f3e5d755b8e7794663a
        State: online
        Cluster Id: b689b88cf1d243d580e0b91af21aa543
        Zone: 1
        Management Hostnames: k8s-node2
        Storage Hostnames: 192.168.11.102
        Devices:
                Id:fdce711e3c5f17236709d06ef03a81e7   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Known Paths: /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0 /dev/sdb
                        Bricks:

        Node Id: e1c64f9e1d0b48d0b35cbed842271f18
        State: online
        Cluster Id: b689b88cf1d243d580e0b91af21aa543
        Zone: 1
        Management Hostnames: k8s-node1
        Storage Hostnames: 192.168.11.101
        Devices:
                Id:adeca0813972e01b532495245857486f   State:online   Size (GiB):99   Used (GiB):0   Free (GiB):99
                        Known Paths: /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0 /dev/sdb
                        Bricks:
This completes the binary installation of the three-node GlusterFS cluster. For comparison, my earlier article kubernetes部署glusterfs持久化文件存储_North-java的博客-CSDN博客 installs the GlusterFS cluster the Kubernetes-native way and, at the end, creates a StorageClass, PVC, PV, and Pod to use GlusterFS as Kubernetes persistent storage.
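To wire this cluster into Kubernetes, a StorageClass points PVCs at Heketi. A sketch using the in-tree kubernetes.io/glusterfs provisioner (the resturl matches the Heketi endpoint configured above; heketi-secret is an illustrative Secret holding the admin key, and the default-class annotation is what lets KubeSphere pick it up automatically):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make it the cluster default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.87.101:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"          # illustrative: Secret containing the admin key
  secretNamespace: "kube-system"
```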
IV. Install KubeSphere
KubeSphere's vision is a cloud-native distributed operating system with Kubernetes as its kernel; its architecture lets third-party applications integrate with cloud-native ecosystem components in a plug-and-play fashion, and it supports unified distribution and operation of cloud-native applications across multiple clouds and clusters. The version installed here is the latest release, v3.2.1.
Reference: "Minimal KubeSphere Installation on Kubernetes" (official docs)
Official requirements:
To install KubeSphere 3.2.1 on Kubernetes, your Kubernetes version must be 1.19.x, 1.20.x, 1.21.x, or 1.22.x (experimental support). Make sure your machines meet the minimum hardware requirements: CPU > 1 core, memory > 2 GB. A default storage class must be configured in the Kubernetes cluster before installation.
Preparation:
Step 1: prepare a Kubernetes cluster as shown at the beginning of this article. The highest officially supported version is 1.22; my test cluster runs 1.23, and no issues have surfaced so far.
[root@k8s-master1 kubesphere]# kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
k8s-master1   Ready    control-plane,master   102d   v1.23.1
k8s-node1     Ready    worker                 102d   v1.23.1
k8s-node2     Ready    worker                 18d    v1.23.1
k8s-node3     Ready    worker                 5d5h   v1.23.1
Step 2: install metrics-server. KubeSphere would install it as well, but that is slow, so install it yourself in advance.
1. Download the official yaml file
wget --no-check-certificate https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl apply -f components.yaml
2. Install KubeSphere
Download the official v3.2.1 installer manifests and apply them:
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
3. Check the installation logs
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
4. Verify the components are running:
Run kubectl get pod --all-namespaces and check whether all Pods in the KubeSphere-related namespaces are running normally. If so, check the console's port (30880 by default) with the following command:
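The readiness check can also be scripted against `kubectl get pod --all-namespaces -o json`. A Python sketch with a trimmed sample embedded so it runs standalone (the pod names are illustrative):

```python
import json

# Sample shaped like `kubectl get pod --all-namespaces -o json` (trimmed).
PODS_JSON = """
{"items": [
  {"metadata": {"namespace": "kubesphere-system", "name": "ks-console-0"},
   "status": {"phase": "Running"}},
  {"metadata": {"namespace": "kubesphere-system", "name": "ks-installer-0"},
   "status": {"phase": "Succeeded"}}
]}
"""

def not_ready(pods_json):
    """List pods whose phase is neither Running nor Succeeded."""
    items = json.loads(pods_json)["items"]
    return [f'{p["metadata"]["namespace"]}/{p["metadata"]["name"]}'
            for p in items
            if p["status"]["phase"] not in ("Running", "Succeeded")]

print(not_ready(PODS_JSON))  # [] -- everything healthy in the sample
```

On a live cluster, feed it the output of `kubectl get pod --all-namespaces -o json` via subprocess and retry until the list is empty.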
kubectl get svc/ks-console -n kubesphere-system
5. Access the web console:
Make sure port 30880 is open in your security group, then access the web console through the NodePort (IP:30880) with the default account and password (admin/P@88w0rd).
6. Check the system components
After logging in to the console, you can check the status of the different components under System Components. If you want to use the related services, you may need to wait for some components to be up and running.