2022-09-11
k8s Examples
1. Manually adjusting the pod count

kubectl scale scales the number of pods running in a k8s environment out (increase) or in (decrease).

```shell
# Check the current pod count
root@k8s-master:/usr/local/haproxy_exporter# kubectl get deployment -n linux36
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
linux36-nginx-deployment         1/1     1            1           21h
linux36-tomcat-app1-deployment   1/1     1            1           21h

# Check the command help
root@k8s-master:/usr/local/haproxy_exporter# kubectl --help | grep scale
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

# Scale out / scale in
root@k8s-master:/usr/local/haproxy_exporter# kubectl scale deployment/linux36-tomcat-app1-deployment --replicas=2 -n linux36
deployment.extensions/linux36-tomcat-app1-deployment scaled

# Verify
root@k8s-master:/usr/local/haproxy_exporter# kubectl get deployment -n linux36
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
linux36-nginx-deployment         1/1     1            1           21h
linux36-tomcat-app1-deployment   2/2     2            2           21h
```
2. HPA: automatically scaling the pod count

kubectl autoscale automatically controls the number of pods running in a k8s cluster (horizontal autoscaling); the pod count range and the trigger conditions must be configured in advance.

Starting with version 1.1, k8s ships a controller named HPA (Horizontal Pod Autoscaler) that automatically scales pods in and out based on pod resource (CPU/memory) utilization. Early versions could only use the Heapster component with CPU utilization as the trigger condition; since k8s 1.11, data collection is handled by Metrics Server, which exposes the collected data through aggregated APIs (metrics.k8s.io, custom.metrics.k8s.io, external.metrics.k8s.io). The HPA controller then queries that data, so pods can be scaled based on the utilization of a given resource.

By default the controller manager queries metrics resource usage every 15s (configurable via --horizontal-pod-autoscaler-sync-period).

Three metrics types are supported:
- predefined metrics (e.g. pod CPU), computed as a utilization percentage
- custom pod metrics, computed as raw values
- custom object metrics

Two metrics query methods are supported:
- Heapster
- a custom REST API

Multiple metrics can be used simultaneously.
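The scaling decision itself follows the documented HPA formula desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A minimal shell sketch of the arithmetic, with made-up utilization numbers:

```shell
# desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
current_replicas=2
current_cpu=90   # observed average CPU utilization in %, made-up value
target_cpu=80    # the configured target (e.g. --cpu-percent=80)
# integer ceiling division
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"
```

With these numbers the HPA would scale the deployment to 3 replicas.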
2.1. Preparing metrics-server

Use metrics-server as the HPA data source.

Clone the code:

```shell
git clone metrics-server/
```
Prepare the image:

```shell
docker pull k8s.gcr.io/metrics-server-amd64:v0.3.3
docker load -i metrics-server-amd64_v0.3.3.tar.gz
docker tag 1a76c5318f6d harbor.gesila.com/k8s/metrics-server-amd64:v0.3.3
docker push harbor.gesila.com/k8s/metrics-server-amd64:v0.3.3
```
2.2. YAML files

Change the image source in metrics-server-deployment.yaml:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        #image: k8s.gcr.io/metrics-server-amd64:v0.3.0
        image: harbor.gesila.com/k8s/metrics-server-amd64:v0.3.3
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
```
2.3. Creating the metrics-server service

```shell
root@k8s-master:~/metrics-server-master# kubectl apply -f deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
```

Check that the pod is running:

```shell
kubectl get pods -n kube-system
```
2.4. Modifying the controller-manager startup parameters

```shell
kube-controller-manager --help | grep horizontal-pod-autoscaler-sync-period
```

```shell
vim /etc/systemd/system/kube-controller-manager.service
```

Relevant parts of the unit file (the Documentation value, the master address, and the ExecStart line itself are truncated in the source; only the flags survive):

```
[Unit]
Description=Kubernetes Controller Manager
Documentation=

  --address=127.0.0.1 \
  --master= \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.20.0.0/16 \
  --cluster-cidr=172.31.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --leader-elect=true \
  --horizontal-pod-autoscaler-use-rest-clients=false \  # do not consume other client data
  --horizontal-pod-autoscaler-sync-period=10s \         # metrics collection interval
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Reload and restart, then verify the flags took effect:

```shell
systemctl daemon-reload && systemctl restart kube-controller-manager
ps -ef | grep kube-controller-manager
```
2.5. Configuring scaling from the command line

```shell
root@k8s-master:~/metrics-server-master# kubectl get pods -n linux36
NAME                                              READY   STATUS    RESTARTS   AGE
linux36-nginx-deployment-598cb57658-7725v         1/1     Running   0          3h39m
linux36-tomcat-app1-deployment-74c7768479-877fm   1/1     Running   1          27h
root@k8s-master:~/metrics-server-master# kubectl get deployment -n linux36
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
linux36-nginx-deployment         1/1     1            1           27h
linux36-tomcat-app1-deployment   1/1     1            1           27h
root@k8s-master:~/metrics-server-master# kubectl autoscale deployment/linux36-nginx-deployment --min=1 --max=3 --cpu-percent=80 -n linux36
horizontalpodautoscaler.autoscaling/linux36-nginx-deployment autoscaled

# Verify:
kubectl describe deployment/linux36-nginx-deployment -n linux36
```

Column meanings:
- DESIRED: the desired number of replicas in the READY state
- CURRENT: the current total number of replicas
- UP-TO-DATE: the number of replicas that have finished updating
- AVAILABLE: the number of currently available replicas
2.6. Defining scaling in a YAML file

```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: linux36-tomcat-app1-deployment-label
  name: linux36-tomcat-app1-deployment
  namespace: linux36
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linux36-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: linux36-tomcat-app1-selector
    spec:
      containers:
      - name: linux36-tomcat-app1-container
        image: harbor.magedu.net/linux36/tomcat-app1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name:
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 2
            memory: "2048Mi"
          requests:
            cpu: 500m
            memory: "1024Mi"
        volumeMounts:
        - name: linux36-images
          mountPath: /data/tomcat/webapps/myapp/images
          readOnly: false
        - name: linux36-static
          mountPath: /data/tomcat/webapps/myapp/static
          readOnly: false
      volumes:
      - name: linux36-images
        nfs:
          server: 192.168.47.47
          path: /data/k8sdata/linux36/images
      - name: linux36-static
        nfs:
          server: 192.168.47.47
          path: /data/k8sdata/linux36/static
      #nodeSelector:        # goes right after the containers section
      #  project: linux36   # the label to match
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: linux36-tomcat-app1-service-label
  name: linux36-tomcat-app1-service
  namespace: linux36
spec:
  type: NodePort
  ports:
  - name:
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30003
  selector:
    app: linux36-tomcat-app1-selector
---
apiVersion: autoscaling/v2beta1            # API version
kind: HorizontalPodAutoscaler              # object type
metadata:                                  # object metadata
  namespace: linux36                       # namespace the object belongs to
  name: linux36-tomcat-app1-podautoscaler  # object name
  labels:                                  # label definitions
    app: linux36-tomcat-app1               # custom label name
    version: v2beta1                       # custom api-version label
spec:                                      # object spec
  scaleTargetRef:                          # scale target: Deployment, ReplicationController/ReplicaSet
    apiVersion: apps/v1                    # HorizontalPodAutoscaler.spec.scaleTargetRef.apiVersion
    kind: Deployment                       # target object type is Deployment
    name: linux36-tomcat-app1-deployment   # name of the deployment
  minReplicas: 2                           # minimum pod count
  maxReplicas: 5                           # maximum pod count
  metrics:                                 # metrics definitions
  - type: Resource                         # type: resource
    resource:                              # resource definition
      name: cpu                            # resource name: cpu
      targetAverageUtilization: 80         # target CPU utilization
  - type: Resource
    resource:
      name: memory                         # resource name: memory
      targetAverageValue: 200Mi            # target memory value
```
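Note that autoscaling/v2beta1 has been removed from recent Kubernetes releases. On current clusters the same HPA can be expressed with the stable autoscaling/v2 API, where each metric's target moves under a `target` field. A sketch reusing the names from the example above:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  namespace: linux36
  name: linux36-tomcat-app1-podautoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: linux36-tomcat-app1-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization        # percentage of the pod's CPU request
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue       # absolute per-pod value
        averageValue: 200Mi
```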
2.7. Verifying the HPA

```shell
kubectl get hpa -n linux36
kubectl describe hpa linux36-nginx-deployment -n linux36
```
3. Dynamically modifying resources with kubectl edit

Used when a temporary configuration change needs to take effect immediately.

```shell
root@k8s-master:/usr/local/haproxy_exporter# kubectl get deployment -n linux36
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
linux36-nginx-deployment         1/1     1            1           21h
linux36-tomcat-app1-deployment   1/1     1            1           21h

# Modify the replica count / image address
kubectl edit deployment linux36-nginx-deployment -n linux36

# Verify that the replica count matches what was set in the edit
root@k8s-master:/usr/local/haproxy_exporter# kubectl get deployment -n linux36
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
linux36-nginx-deployment         2/2     2            2           21h
linux36-tomcat-app1-deployment   1/1     1            1           21h
root@k8s-master:/usr/local/haproxy_exporter# kubectl get pods -n linux36
NAME                                              READY   STATUS    RESTARTS   AGE
linux36-nginx-deployment-6d858d49d-2l6pd          1/1     Running   1          21h
linux36-nginx-deployment-6d858d49d-rdhbg          1/1     Running   0          31s
linux36-tomcat-app1-deployment-74c7768479-877fm   1/1     Running   1          21h
```
4. Defining node resource labels

A label is a key/value pair. When a pod is created, the scheduler checks which nodes carry the required label and only places the pod on nodes whose label value matches.
4.1. Viewing the current node labels

```shell
kubectl describe node 192.168.47.53
```
4.2. Defining a custom node label and verifying it

```shell
root@k8s-master:/usr/local/haproxy_exporter# kubectl label node 192.168.47.53 project=linux36
node/192.168.47.53 labeled
root@k8s-master:/usr/local/haproxy_exporter# kubectl label nodes 192.168.47.53 test_label=test
node/192.168.47.53 labeled
```
4.3. Example YAML referencing a node label

```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: linux36-tomcat-app1-deployment-label
  name: linux36-tomcat-app1-deployment
  namespace: linux36
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linux36-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: linux36-tomcat-app1-selector
    spec:
      containers:
      - name: linux36-tomcat-app1-container
        image: harbor.gesila.com/k8s/tomcat-app1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name:
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 2
            memory: "2048Mi"
          requests:
            cpu: 500m
            memory: "1024Mi"
        volumeMounts:
        - name: linux36-images
          mountPath: /data/tomcat/webapps/myapp/images
          readOnly: false
        - name: linux36-static
          mountPath: /data/tomcat/webapps/myapp/static
          readOnly: false
      volumes:
      - name: linux36-images
        nfs:
          server: 192.168.47.47
          path: /data/k8sdata/linux36/images
      - name: linux36-static
        nfs:
          server: 192.168.47.47
          path: /data/k8sdata/linux36/static
      nodeSelector:        # goes right after the containers section
        project: linux36   # the label to match
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: linux36-tomcat-app1-service-label
  name: linux36-tomcat-app1-service
  namespace: linux36
spec:
  type: NodePort
  ports:
  - name:
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30003
  selector:
    app: linux36-tomcat-app1-selector
```
4.4. Deleting a custom node label

Appending a trailing - to the label key deletes it:

```shell
root@k8s-master:/usr/local/prometheus# kubectl label nodes 192.168.47.53 test_label-
node/192.168.47.53 labeled
```
5. Upgrading and rolling back the business image version

In a given deployment, kubectl set image specifies a new image:tag to roll out updated code.

Build three image versions, then apply:

```shell
root@k8s-master:~/images/k8s-tomcat/nginx-tomcat# kubectl apply -f nginx.yaml --record=true
deployment.extensions/linux36-nginx-deployment configured
service/linux36-nginx-service configured
# --record=true records the executed kubectl command in the revision history;
# rollbacks rely on it
```
5.1. Upgrading the image to a specified version

Check the current state:

```shell
root@k8s-master:~/images/k8s-tomcat/nginx-tomcat# kubectl get deployment -n linux36
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
linux36-nginx-deployment         1/1     1            1           23h
linux36-tomcat-app1-deployment   1/1     1            1           23h
root@k8s-master:~/images/k8s-tomcat/nginx-tomcat# kubectl get pod -n linux36
NAME                                              READY   STATUS    RESTARTS   AGE
linux36-nginx-deployment-598cb57658-5s95p         1/1     Running   0          4m47s
linux36-tomcat-app1-deployment-74c7768479-877fm   1/1     Running   1          23h
```

The image update command format is:

```shell
kubectl set image deployment/deployment-name containers-name=image -n namespace
```
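For repeated upgrades, the format above can be wrapped in a small helper. This is a hypothetical convenience function, not part of kubectl; the echo keeps it a dry run:

```shell
# Hypothetical wrapper around `kubectl set image`; echo makes it a dry run.
set_image() {
  local deployment="$1" container="$2" image="$3" ns="$4"
  echo kubectl set image "deployment/${deployment}" "${container}=${image}" -n "${ns}"
}

set_image linux36-nginx-deployment linux36-nginx-container harbor.gesila.com/k8s/nginx-web1:v6 linux36
```

Drop the echo to execute the command for real.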
Upgrade to v6:

```shell
kubectl set image deployment/linux36-nginx-deployment linux36-nginx-container=harbor.gesila.com/k8s/nginx-web1:v6 -n linux36
```

Upgrade to v7:

```shell
kubectl set image deployment/linux36-nginx-deployment linux36-nginx-container=harbor.gesila.com/k8s/nginx-web1:v7 -n linux36
```
5.2. Viewing revision history

```shell
kubectl rollout history deployment/linux36-nginx-deployment -n linux36
```
5.3. Rolling back to the previous version

```shell
kubectl rollout undo deployment/linux36-nginx-deployment -n linux36
```

This rolls back to the previous version. When executed repeatedly, the effect is: say the current version is v6 at revision 3; upgrading to v7 creates revision 4; rolling back to v6 creates revision 5; rolling back again returns to v7 (revision 6); rolling back once more returns to v6 (revision 7), and so on. Repeated rollbacks therefore only toggle between the two most recent versions.
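The alternation described above can be modeled with two shell variables standing in for the two most recent revisions (a toy illustration, not real kubectl state):

```shell
# Toy model of repeated `kubectl rollout undo`: each undo swaps the
# current version with the previous one, so only two versions alternate.
current=v7
previous=v6
undo() { tmp="$current"; current="$previous"; previous="$tmp"; }

undo; echo "$current"   # v6
undo; echo "$current"   # v7
```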
5.4. Rolling back to a specified revision

```shell
kubectl rollout undo deployment/linux36-nginx-deployment --to-revision=2 -n linux36
```
6. Cordoning a node so it no longer takes part in scheduling

```shell
root@k8s-master:/usr/local/prometheus# kubectl --help | grep cordon
  cordon         Mark node as unschedulable    # cordoned: excluded from pod scheduling
  uncordon       Mark node as schedulable      # cordon removed: takes part in pod scheduling again
```
Exclude the node from scheduling:

```shell
root@k8s-master:/usr/local/prometheus# kubectl cordon 192.168.47.53
node/192.168.47.53 cordoned
```

Include it in scheduling again:

```shell
root@k8s-master:/usr/local/prometheus# kubectl uncordon 192.168.47.53
node/192.168.47.53 uncordoned
```
7. Deleting a pod from etcd

Suitable for automation scenarios.

7.1. Viewing namespace-related data

```shell
ETCDCTL_API=3 etcdctl get /registry/ --prefix --keys-only | grep linux36
```

7.2. Viewing the data of a specific object in etcd

```shell
ETCDCTL_API=3 etcdctl get /registry/pods/linux36/linux36-nginx-deployment-6d858d49d-2l6pd
```
7.3. Deleting a specified resource from etcd

```shell
root@k8s-etcd2:~# ETCDCTL_API=3 etcdctl del /registry/pods/linux36/linux36-nginx-deployment-6d858d49d-2l6pd
1
```

etcdctl del prints the number of keys deleted: an output of 1 means the key was deleted successfully; 0 means no matching key was found and nothing was deleted.