[K8s Ops Notes] Day 6, Part 8: Hands-On — Managing Dubbo Services Across Environments with Apollo

Reader submission · 377 · 2022-09-07


Contents

- Splitting the zk environment into test and prod
- Per-environment namespaces: creating test and prod; splitting the database (lab resources are limited, so separate databases simulate separate environments)
- Creating and modifying the manifests for both environments: cm.yaml (namespace, database name, eureka address), dp.yaml, svc.yaml, ingress.yaml
- Deploying apollo-adminservice: cm.yaml, dp.yaml
- Deploying the prod apollo-configservice, same routine: cm.yaml, dp.yaml, svc.yaml, ingress.yaml
- Modifying the apollo-adminservice manifests: cm.yaml, dp.yaml
- Verifying at portal.od.com and creating two projects
- Delivering the Dubbo services per environment: dp.yaml
- Delivering dubbo-demo-consumer: dp.yaml, svc.yaml, ingress.yaml
- Delivering the prod dubbo-demo-server and dubbo-demo-consumer services: dp.yaml
- Building the dubbo-demo-consumer manifests: dp.yaml, svc.yaml, ingress.yaml
- Simulating a release

To separate environments, the existing lab environment must be split. The portal service can be shared across all environments, but apollo-adminservice and apollo-configservice must be deployed separately per environment.

Splitting the zk environment into test and prod

Add the DNS records:

# vi /var/named/od.com.zone

Per-environment namespaces: create test and prod

# kubectl create ns test
# kubectl create ns prod

Create the image-pull secrets:

# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=harbor --docker-password=Harbor12345 -n test
# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=harbor --docker-password=Harbor12345 -n prod
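The namespace and secret creation is identical for both environments, so as a sketch the commands can be generated in a loop (a dry run that only prints the commands instead of executing kubectl):

```shell
#!/bin/sh
# Print the per-environment bootstrap commands (dry run: echo instead of kubectl).
for env in test prod; do
  echo "kubectl create ns ${env}"
  echo "kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=harbor --docker-password=Harbor12345 -n ${env}"
done
```

Piping the output to `sh` would execute it for real once the cluster context is correct.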

Split the database. Since lab resources are limited, separate databases on one instance simulate the separate environments.

Modify the database init scripts to create ApolloConfigTestDB and ApolloConfigProdDB.

Update the eureka address in each database. Two new domains are used here; add the records in bind9 yourself.

> update ApolloConfigProdDB.ServerConfig set ServerConfig.Value="" where ServerConfig.Key="eureka.service.url";
> grant INSERT,DELETE,UPDATE,SELECT on ApolloConfigProdDB.* to "apolloconfig"@"10.4.7.%" identified by "123456";
> update ApolloConfigTestDB.ServerConfig set ServerConfig.Value="" where ServerConfig.Key="eureka.service.url";
> grant INSERT,DELETE,UPDATE,SELECT on ApolloConfigTestDB.* to "apolloconfig"@"10.4.7.%" identified by "123456";
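The two databases differ only by name, so the statements can also be generated mechanically. A sketch (dry run that just prints the SQL; EUREKA_URL is a placeholder for the per-environment eureka address, whose actual value is not shown above):

```shell
#!/bin/sh
# Print the per-database SQL (dry run). EUREKA_URL is a placeholder:
# substitute each environment's real eureka.service.url before running.
for db in ApolloConfigTestDB ApolloConfigProdDB; do
  echo "update ${db}.ServerConfig set ServerConfig.Value='EUREKA_URL' where ServerConfig.Key='eureka.service.url';"
  echo "grant INSERT,DELETE,UPDATE,SELECT on ${db}.* to 'apolloconfig'@'10.4.7.%' identified by '123456';"
done
```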

Update the portal data to support the fat and pro environments:

> update ApolloPortalDB.ServerConfig set Value='fat,pro' where Id=1;

Modify the portal's ConfigMap manifest:

# vi /data/k8s-yaml/apollo-portal/cm.yaml

# kubectl apply -f
# cd /data/k8s-yaml

# mkdir -p test/{apollo-adminservice,apollo-configservice,dubbo-demo-server,dubbo-demo-consumer}
# mkdir -p prod/{apollo-adminservice,apollo-configservice,dubbo-demo-server,dubbo-demo-consumer}

Copy the earlier manifests into each environment's directory and modify them:

# cd test/apollo-configservice/
# cp ../../apollo-configservice/* ./

cm.yaml — change the namespace, database name, and eureka address

apiVersion: v1
kind: ConfigMap
metadata:
  name: apollo-configservice-cm
  namespace: test
data:
  application-github.properties: |
    # DataSource
    spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloConfigTestDB?characterEncoding=utf8
    spring.datasource.username = apolloconfig
    spring.datasource.password = 123456
    eureka.service.url =
  app.properties: |
    appId=100003171

dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apollo-configservice
  namespace: test

svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: apollo-configservice
  namespace: test

ingress.yaml

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: apollo-configservice
  namespace: test

The service has registered with Eureka.

Next, deploy apollo-adminservice

Modify the apollo-adminservice manifests:

# cd /data/k8s-yaml/test/apollo-adminservice
# cp ../../apollo-adminservice/* ./

cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: apollo-adminservice-cm
  namespace: test
data:
  application-github.properties: |
    # DataSource
    spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloConfigTestDB?characterEncoding=utf8
    spring.datasource.username = apolloconfig
    spring.datasource.password = 123456
    eureka.service.url =
  app.properties: |
    appId=100003172

dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apollo-adminservice
  namespace: test

Apply the manifests:

# kubectl apply -f
# kubectl apply -f
# cd ../../prod/apollo-configservice/
# cp ../../apollo-configservice/* ./

Modify the manifests:

cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: apollo-configservice-cm
  namespace: prod
data:
  application-github.properties: |
    # DataSource
    spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloConfigProdDB?characterEncoding=utf8
    spring.datasource.username = apolloconfig
    spring.datasource.password = 123456
    eureka.service.url =
  app.properties: |
    appId=100003171

dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apollo-configservice
  namespace: prod
  labels:
    name: apollo-configservice
spec:
  replicas: 1
  selector:
    matchLabels:
      name: apollo-configservice
  template:
    metadata:
      labels:
        app: apollo-configservice
        name: apollo-configservice
    spec:
      volumes:
      - name: configmap-volume
        configMap:
          name: apollo-configservice-cm
      containers:
      - name: apollo-configservice
        image: harbor.od.com/infra/apollo-configservice:v1.5.1
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: configmap-volume
          mountPath: /apollo-configservice/config
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: apollo-configservice
  namespace: prod
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: apollo-configservice

ingress.yaml

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: apollo-configservice
  namespace: prod
spec:
  rules:
  - host: config-prod.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: apollo-configservice
          servicePort: 8080

Apply the manifests:

# kubectl apply -f
# kubectl apply -f
# kubectl apply -f
# kubectl apply -f
# cd ../apollo-adminservice/
# cp ../../apollo-adminservice/* ./

cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: apollo-adminservice-cm
  namespace: prod
data:
  application-github.properties: |
    # DataSource
    spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloConfigProdDB?characterEncoding=utf8
    spring.datasource.username = apolloconfig
    spring.datasource.password = 123456
    eureka.service.url =
  app.properties: |
    appId=100003172

dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apollo-adminservice
  namespace: prod
  labels:
    name: apollo-adminservice
spec:
  replicas: 1
  selector:
    matchLabels:
      name: apollo-adminservice
  template:
    metadata:
      labels:
        name: apollo-adminservice
    spec:
      volumes:
      - name: configmap-volume
        configMap:
          name: apollo-adminservice-cm
      containers:
      - name: apollo-adminservice
        image: harbor.od.com/infra/apollo-adminservice:v1.5.1
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: configmap-volume
          mountPath: /apollo-adminservice/config
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

Apply the manifests:

# kubectl apply -f
# kubectl apply -f
# mysql -uroot -p
> use ApolloPortalDB;
> truncate table App;
> truncate table AppNamespace;

Open portal.od.com to verify, and create two projects

First create dubbo-demo-service.

Add the configuration in both environments; note that the connection address is test.od.com in one and prod.od.com in the other.

Next create the dubbo-demo-web project. Again publish to both environments, one using test.od.com and one using prod.od.com.

Next, deliver the Dubbo services per environment

Same routine: modify the earlier projects' manifests:

# cd /data/k8s-yaml/test/dubbo-demo-server
# cp ../../dubbo-server/* ./

dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: test
  labels:
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-demo-service
  template:
    metadata:
      labels:
        app: dubbo-demo-service
        name: dubbo-demo-service
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.od.com/app/dubbo-demo-service:apollo_191211_1916
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=fat -Dapollo.meta=
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

Apply the manifests:

# kubectl apply -f
# cd /data/k8s-yaml/test/dubbo-demo-consumer
# cp ../../dubbo-consumer/* ./

dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: test
  labels:
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-demo-consumer
  template:
    metadata:
      labels:
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.od.com/app/dubbo-demo-web:apollo_191212_1715
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=fat -Dapollo.meta=
        - name: JAR_BALL
          value: dubbo-client.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: dubbo-demo-consumer
  namespace: test

ingress.yaml

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: test

Two new domains are used here; add the DNS records:

Apply the test-environment dubbo-demo-consumer manifests:

# kubectl apply -f
# kubectl apply -f
# kubectl apply -f
# cd /data/k8s-yaml/prod/dubbo-demo-server
# cp ../../dubbo-server/* ./

dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: prod
  labels:
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-demo-service
  template:
    metadata:
      labels:
        app: dubbo-demo-service
        name: dubbo-demo-service
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.od.com/app/dubbo-demo-service:apollo_191211_1916
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=pro -Dapollo.meta=
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

Apply the manifests:

# kubectl apply -f
# cd /data/k8s-yaml/prod/dubbo-demo-consumer
# cp ../../dubbo-consumer/* ./

dp.yaml

Here the Service resource name can be used directly for calls instead of going through the ingress, because a Service is only resolvable within its own namespace.
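As an illustration of the addressing involved (a sketch with assumptions: cluster.local is the default cluster DNS suffix, and apollo-configservice stands in for whatever Service the consumer addresses, neither of which is spelled out here), the short name works inside the same namespace, while cross-namespace access needs the fully qualified form:

```shell
#!/bin/sh
# Illustrate in-cluster Service addressing (assumed default DNS domain cluster.local).
svc=apollo-configservice   # assumed example target Service
ns=prod
echo "same-namespace:  http://${svc}:8080"
echo "fully qualified: http://${svc}.${ns}.svc.cluster.local:8080"
```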

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: prod
  labels:
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-demo-consumer
  template:
    metadata:
      labels:
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.od.com/app/dubbo-demo-web:apollo_191212_1715
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=pro -Dapollo.meta=
        - name: JAR_BALL
          value: dubbo-client.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: dubbo-demo-consumer
  namespace: prod
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: dubbo-demo-consumer

ingress.yaml

[root@hdss7-200 dubbo-demo-consumer]# cat ingress.yaml

Apply the manifests:

# kubectl apply -f
# kubectl apply -f
# kubectl apply -f
# kubectl apply -f http://k8s-yaml.od.com/test/dubbo-demo-consumer/dp.yaml

The new code is now live in the test environment; next, roll it out to the prod environment.

Modify the prod environment's dp.yaml the same way and apply it:
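Mechanically, the release amounts to swapping the image tag in dp.yaml and re-applying it; a minimal sketch (the new tag below is hypothetical, and the heredoc stands in for the real dp.yaml):

```shell
#!/bin/sh
# Simulate a release: bump only the image tag in dp.yaml (hypothetical tag name).
cd "$(mktemp -d)"
cat > dp.yaml <<'EOF'
        image: harbor.od.com/app/dubbo-demo-web:apollo_191212_1715
EOF
new_tag=apollo_200101_1200   # hypothetical new build tag
sed "s#:apollo_[0-9_]*#:${new_tag}#" dp.yaml > dp.yaml.new && mv dp.yaml.new dp.yaml
cat dp.yaml
# then: kubectl apply -f dp.yaml
```

Because the namespace, -Denv, and apollo.meta settings stay untouched, the same image can be promoted from test to prod by editing one line.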

The release is now live in production. With this, a complete environment-separated release workflow built on the Apollo config center is in place, and it truly achieves build once, deploy to many environments.
