k8s Resources: namespace & replicaset & deployment

Contributed post · 261 views · 2022-10-31



namespace:

•Name space

•Used to isolate different applications

•Abbreviated as ns

Common commands:

•kubectl get namespaces

•kubectl describe namespace default

•kubectl create -f namespace-test.yaml

•kubectl delete namespace test

•kubectl apply -f namespace-test.yaml

•kubectl label namespace test aa=bb

•kubectl get ns -l aa=bb

•kubectl label ns test aa-

•kubectl edit ns test

•kubectl annotate ns test aa=bb

•kubectl annotate ns test aa-

•kubectl delete ns test --force --grace-period=0

•kubectl create ns test

•kubectl get ns test -o yaml

resourceQuota:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
```

replicaset:

•Ensures Pod count: it makes sure the specified number of Pods are running in Kubernetes. If there are fewer Pods than specified, the ReplicaSet creates new ones; if there are more, it deletes the extras, keeping the number of Pod replicas constant.

•Ensures Pod health: when a Pod is unhealthy, for example it has crashed or can no longer serve traffic, the ReplicaSet kills the unhealthy Pod and creates a new one.

•Elastic scaling: during traffic peaks or troughs, the ReplicaSet can be used to adjust the number of Pods dynamically and improve resource utilization; as mentioned before, an HPA resource object can make this scaling automatic.

•Rolling upgrade: a rolling upgrade is a smooth upgrade method that replaces Pods step by step, keeping the overall system stable.

Common commands:

•kubectl create -f replicaset.yaml

•kubectl describe rs frontend

•kubectl edit rs frontend

•kubectl label rs frontend aa=bb

•kubectl label rs frontend aa-

•kubectl annotate rs frontend xx=yy

•kubectl annotate rs frontend xx-

•kubectl get rs frontend -o yaml

•kubectl get rs -l aa=bb

•kubectl get rs -o wide

•kubectl scale rs frontend --replicas=4

•kubectl apply -f replicaset.yaml

•kubectl patch rs frontend -p '{"metadata":{"labels":{"xx":"yy"}}}'

metadata.generation:

•metadata.generation records how many times this ReplicaSet's configuration has been modified, which gives a notion of version iteration. Each time we use kubectl edit to change the ReplicaSet's configuration, or update its image, generation increases by 1, indicating a new version.
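For example, generation can be read directly with jsonpath (a sketch, not runnable without a cluster; it assumes a ReplicaSet named frontend, as in the manifest below, is already running):

```shell
# Read the current generation of the ReplicaSet.
kubectl get rs frontend -o jsonpath='{.metadata.generation}{"\n"}'

# Make a spec change, then read it again: generation increments by 1.
kubectl set image rs/frontend php-redis=iaasfree/gb-frontend:v4
kubectl get rs frontend -o jsonpath='{.metadata.generation}{"\n"}'
```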

metadata.ownerReferences:

•This field records the Owner of the ReplicaSet.

•If the ReplicaSet was created by a Deployment, its owner is that Deployment.

metadata.resourceVersion:

•resourceVersion is the current version number of this resource object.

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: iaasfree/gb-frontend:v3
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80
```

hpa:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
  labels:
    software: apache
    project: frontend
    app: hpa
    version: v1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 10
```
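The HPA controller picks the desired replica count from the ratio of current to target utilization: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds. A minimal sketch of that arithmetic (plain Python, not part of any Kubernetes API):

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float,
                     min_replicas: int,
                     max_replicas: int) -> int:
    """HPA scaling formula: ceil(current * current/target), clamped to [min, max]."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# With the manifest above (minReplicas 1, maxReplicas 5, target 10% CPU):
print(desired_replicas(2, 25, 10, 1, 5))  # 2 * 25/10 = 5 -> scale up to 5
print(desired_replicas(3, 2, 10, 1, 5))   # ceil(3 * 0.2) = 1 -> scale down to 1
```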

deployment:

You only need to describe the desired target state in a Deployment, and the Deployment controller will change the actual state of the Pods and ReplicaSets to match it. You can define a brand-new Deployment to create a ReplicaSet, or delete an existing Deployment and create a new one to replace it. Note: you should not manually manage the ReplicaSets created by a Deployment; doing so usurps the Deployment controller's responsibilities!

Common commands:

•kubectl set image deployment nginx-deployment nginx=nginx:1.13

•kubectl delete -f nginx-deploy.yaml

•kubectl create -f nginx-deploy.yaml

•kubectl apply -f nginx-deploy.yaml

•kubectl edit deploy nginx-deployment

•kubectl label deploy nginx-deployment stage=test

•kubectl label deploy nginx-deployment stage-

•kubectl annotate deploy nginx-deployment anno=xx

•kubectl annotate deploy nginx-deployment anno-

•kubectl replace -f nginx-deploy.yaml

•kubectl patch deploy nginx-deployment -p '{"metadata":{"labels":{"aa":"bb"}}}'

•kubectl diff -f nginx-deploy.yaml

•kubectl describe deploy nginx-deployment

•kubectl rollout history deploy/nginx-deployment

•kubectl rollout pause deploy/nginx-deployment

•kubectl rollout resume deploy/nginx-deployment

•kubectl rollout restart deploy/nginx-deployment

•kubectl rollout status deploy/nginx-deployment

•kubectl rollout undo deploy/nginx-deployment

•kubectl rollout undo daemonset/abc --to-revision=3

•kubectl scale deploy nginx-deployment --replicas=3

•kubectl autoscale deployment foo --min=2 --max=10

•kubectl autoscale deployment foo --max=5 --cpu-percent=80

•kubectl set image deploy nginx-deployment nginx=nginx:1.17.6

•kubectl set image deploy nginx-deployment nginx=nginx:1.17.6 --record=true
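Taken together, a typical rollout cycle using the commands above might look like this (a sketch, not runnable without a cluster; it assumes a Deployment named nginx-deployment already exists):

```shell
# Update the image, watch the rollout, then roll back if needed.
kubectl set image deploy nginx-deployment nginx=nginx:1.17.6
kubectl rollout status deploy/nginx-deployment   # blocks until done or failed
kubectl rollout history deploy/nginx-deployment  # list recorded revisions
kubectl rollout undo deploy/nginx-deployment     # revert to the previous revision
```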

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

hpa:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 10
```

rollingupdate:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
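With absolute values for maxSurge and maxUnavailable, a rolling update keeps the Pod count between replicas - maxUnavailable and replicas + maxSurge. A small sketch of that bound arithmetic (percentages, which Kubernetes also accepts, are omitted here for brevity):

```python
def rollout_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Minimum available and maximum total Pods during a RollingUpdate rollout."""
    min_available = replicas - max_unavailable
    max_total = replicas + max_surge
    return min_available, max_total

# The manifest above: replicas=2, maxSurge=1, maxUnavailable=1
print(rollout_bounds(2, 1, 1))  # (1, 3): never fewer than 1, never more than 3 Pods
```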

recreate:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

progressDeadlineSeconds:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  progressDeadlineSeconds: 1
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

revisionHistoryLimit:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
