Kubernetes----Configuring Pod Resource Quotas

User submission · 292 · 2022-09-11


1. Pod Resource Quotas

1.1 Overview of Resource Quota Configuration

The programs running inside a container inevitably consume resources such as CPU and memory. If a container's resource usage is not limited, it can consume excessive amounts and starve the other containers. To address this, Kubernetes provides a mechanism for setting CPU and memory quotas on containers. It is configured through the `resources` field, which has two sub-fields:

- limits: caps the resources the running container may use. A container that exceeds its memory limit is terminated and restarted; CPU usage beyond the limit is throttled.
- requests: the minimum resources the container needs. If no node in the cluster can provide them, the container cannot be scheduled and will not start.
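CPU quantities can be written either as whole cores ("1", "2") or as millicores ("1000m"), and memory quantities accept decimal (K, M, G) or binary (Ki, Mi, Gi) suffixes. As a rough illustration of how these quantity strings relate to concrete numbers, here is a small Python sketch; `parse_cpu` and `parse_memory` are hypothetical helpers written for this article, not part of any Kubernetes library:

```python
# Sketch: interpret Kubernetes resource quantity strings (simplified;
# the real API supports more suffixes and scientific notation).
def parse_cpu(quantity: str) -> float:
    """'500m' -> 0.5 cores, '2' -> 2.0 cores."""
    if quantity.endswith("m"):          # millicores: 1000m == 1 core
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """'100M' -> 100_000_000 bytes, '128Mi' -> 134_217_728 bytes."""
    units = {"K": 10**3, "M": 10**6, "G": 10**9,
             "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    # Try the longer (binary) suffixes first so 'Mi' is not read as 'M'.
    for suffix in sorted(units, key=len, reverse=True):
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * units[suffix]
    return int(quantity)                 # plain number of bytes

print(parse_cpu("500m"))       # 0.5
print(parse_memory("100M"))    # 100000000
```

Note that "100M" (decimal megabytes) and "100Mi" (mebibytes) differ by about 5%, which occasionally matters when a limit sits close to actual usage.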

1.2 Configuring Resource Quotas

Edit the pod_resources.yaml file as follows, setting an upper and lower resource bound for the nginx container:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources
  namespace: dev
  labels:
    user: redrose2100
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    resources:
      requests:
        cpu: "1"
        memory: "100M"
      limits:
        cpu: "2"
        memory: "512M"
```
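The same quotas can equivalently be expressed in millicores. A hedged, equivalent fragment for the same container (not from the original article) might look like:

```yaml
    resources:
      requests:
        cpu: "1000m"    # same as "1": 1000 millicores = 1 core
        memory: "100M"
      limits:
        cpu: "2000m"    # same as "2"
        memory: "512M"
```

Millicore notation is convenient when requesting fractions of a core, e.g. `cpu: "250m"` for a quarter core.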

Create the Pod with the following command:

```
[root@master pod]# kubectl apply -f pod_resources.yaml
namespace/dev created
pod/pod-resources created
[root@master pod]#
```

Query the Pod with the following command. With this quota configuration the environment can satisfy the requests, so the Pod starts normally:

```
[root@master pod]# kubectl get pod -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-resources   1/1     Running   0          7s
[root@master pod]#
```

Delete the resources with the following command:

```
[root@master pod]# kubectl delete -f pod_resources.yaml
namespace "dev" deleted
pod "pod-resources" deleted
[root@master pod]#
```

1.3 Testing an Over-Quota Configuration

As an experiment, change the CPU request to 10 and the limit to 20, then try again. Since the virtual machines here have only 4 cores each, a request of 10 clearly cannot be satisfied:
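The scheduler's decision here is essentially an arithmetic check: a Pod fits on a node only if the node's allocatable CPU covers the requests already placed there plus the new Pod's request. A minimal sketch of that check (`fits` is a hypothetical helper, not the actual scheduler code):

```python
def fits(node_allocatable_cpu: float, requested_cpu: float,
         already_requested_cpu: float = 0.0) -> bool:
    """True if the node can still satisfy the Pod's CPU request.

    Scheduling is based on requests, not limits: limits may
    oversubscribe a node, but the sum of requests may not exceed
    the node's allocatable capacity.
    """
    return already_requested_cpu + requested_cpu <= node_allocatable_cpu

# A 4-core worker satisfies requests.cpu: "1" but not "10":
print(fits(4, 1))   # True
print(fits(4, 10))  # False
```

This is why the experiment below leaves the Pod Pending: no worker has 10 allocatable cores to offer.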

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources
  namespace: dev
  labels:
    user: redrose2100
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    resources:
      requests:
        cpu: "10"
        memory: "100M"
      limits:
        cpu: "20"
        memory: "512M"
```

Then create it with the following command:

```
[root@master pod]# kubectl apply -f pod_resources.yaml
namespace/dev created
pod/pod-resources created
[root@master pod]#
```

After recreating the Pod, the following commands show that it stays Pending because there is not enough CPU:

```
[root@master pod]# kubectl get pod -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-resources   0/1     Pending   0          17m
[root@master pod]# kubectl describe pod pod-resources -n dev
Name:         pod-resources
Namespace:    dev
Priority:     0
Node:         <none>
Labels:       user=redrose2100
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  nginx:
    Image:      nginx:1.17.1
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     20
      memory:  512M
    Requests:
      cpu:        10
      memory:     100M
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8kvvb (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-8kvvb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  38s (x19 over 18m)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.
[root@master pod]#
```
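Note the `QoS Class: Burstable` line in the output: because the requests are lower than the limits, the Pod is classified as Burstable rather than Guaranteed, which affects eviction priority under node pressure. A simplified single-container sketch of the classification rule (`qos_class` is a hypothetical helper; the real kubelet logic also handles multiple containers per Pod):

```python
def qos_class(requests: dict, limits: dict) -> str:
    """Simplified single-container QoS classification.

    Guaranteed: cpu and memory limits are both set and equal to the
                requests (unset requests default to the limits).
    BestEffort: no requests or limits at all.
    Burstable:  everything else.
    """
    if not requests and not limits:
        return "BestEffort"
    if (set(limits) == {"cpu", "memory"}
            and all(requests.get(k, limits[k]) == limits[k] for k in limits)):
        return "Guaranteed"
    return "Burstable"

# The over-quota Pod above: requests < limits, hence Burstable.
print(qos_class({"cpu": "10", "memory": "100M"},
                {"cpu": "20", "memory": "512M"}))  # Burstable
```

Setting requests equal to limits would make the Pod Guaranteed, the class evicted last when a node runs short of resources.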

Delete the resources with the following command:

```
[root@master pod]# kubectl delete -f pod_resources.yaml
namespace "dev" deleted
pod "pod-resources" deleted
[root@master pod]#
```

