CDH HA configuration for NameNode, YARN, HBase, and Hive, plus Flink on YARN setup

Contributed by a community user · 2022-11-25


The cluster was installed with Cloudera Manager, so the NameNode initially runs as a single node and must be converted to HA separately. There are two prerequisites for enabling NameNode HA:

(1) An odd number of JournalNodes, at least 3; otherwise NameNode HA cannot be configured.
(2) A running ZooKeeper service for the NameNode failover coordination.

In Cloudera Manager, go to HDFS -> Actions and select Enable High Availability.
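Once HA is enabled, you can confirm which NameNode is active from the command line. A sketch, assuming the default CDH nameservice name `nameservice1`; the NameNode IDs printed by the first command are the ones to pass to the second:

```shell
# List the NameNode IDs that CDH assigned to the nameservice
# ("nameservice1" is the CDH default -- adjust if yours differs).
hdfs getconf -confKey dfs.ha.namenodes.nameservice1

# Query a NameNode's role (prints "active" or "standby");
# replace namenode39 with one of the IDs printed above.
hdfs haadmin -getServiceState namenode39
```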

YARN HA availability test

Submit a Pi job to the cluster:

hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 100 10000
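To confirm the job survives a ResourceManager failover, check which RM is active before and after stopping the active one in Cloudera Manager. A sketch; `rm1` and `rm2` are the usual HA IDs but are assumptions here, so verify them against `yarn.resourcemanager.ha.rm-ids` in your yarn-site.xml:

```shell
# Query the role of each ResourceManager (prints "active" or "standby").
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```

If the Pi job completes while the originally active RM is down, YARN HA is working.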

Open Hive -> Configuration -> Category -> Advanced, search for "HiveServer2 Advanced Configuration Snippet (Safety Valve) for hive-site.xml", and add the following property:

hive.server2.support.dynamic.service.discovery true

The following setting is optional: by default HiveServer2 registers under /hiveserver2 in ZooKeeper. Set a distinct namespace only when multiple Hive clusters share the same ZooKeeper ensemble:

hive.server2.zookeeper.namespace hiveserver2_zk
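If you use the Safety Valve's "View as XML" editor rather than the name/value form fields, the two properties above would be entered as hive-site.xml XML. A sketch of the equivalent snippet (the namespace value is the one chosen above):

```xml
<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2_zk</value>
</property>
```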

Restart the HiveServer2 service; on startup it registers itself with ZooKeeper.

Verify that the HiveServer2 instances appear in ZooKeeper using zkCli.sh:

[zk: localhost:2181(CONNECTED) 1] ls /hiveserver2
[serverUri=node2:10000;version=1.2.1.2.3.0.0-2557;sequence=0000000016, serverUri=node1:10000;version=1.2.1.2.3.0.0-2557;sequence=0000000015]
[zk: localhost:2181(CONNECTED) 2] quit
Quitting...
2019-03-01 10:56:00,543 - INFO [main:ZooKeeper@684] - Session: 0x1532d60dbea0006 closed
2019-03-01 10:56:00,543 - INFO [main-EventThread:ClientCnxn$EventThread@512] - EventThread shut down
[root@node1 /]#

Test and verify the connection:

jdbc:hive2:///;serviceDiscoveryMode=zookeeper;zookeeperNamespace=hiveserver2

e.g. jdbc:hive2://node1:2181,node2:2181,host3:2181/;serviceDiscoveryMode=zookeeper;zookeeperNamespace=hiveserver2
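As a quick sanity check of the URL shape, the quorum part can be assembled from the ZooKeeper hosts and then passed to beeline. A sketch using the host names from the example above:

```shell
# Build the HiveServer2 service-discovery JDBC URL from the
# ZooKeeper quorum (hosts taken from the example above).
ZK_QUORUM="node1:2181,node2:2181,host3:2181"
URL="jdbc:hive2://${ZK_QUORUM}/;serviceDiscoveryMode=zookeeper;zookeeperNamespace=hiveserver2"
echo "$URL"
# Connect with: beeline -u "$URL"
```

Because the URL lists the whole quorum, beeline picks whichever registered HiveServer2 instance ZooKeeper returns, which is what makes the setup highly available.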

Flink on YARN configuration

[flink@node10 conf]$ cat flink-conf.yaml
# Basic settings
jobmanager.rpc.address: node10
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
# Failover strategy
jobmanager.execution.failover-strategy: region
# HistoryServer
jobmanager.archive.fs.dir: hdfs:///flink/completed-jobs/
historyserver.archive.fs.dir: hdfs:///flink/completed-jobs/
historyserver.archive.fs.refresh-interval: 10000
#historyserver.web.address: localhost
#historyserver.web.port: 8082
web.port: 8081
# Fault tolerance and checkpointing
state.backend: filesystem
state.checkpoints.dir: hdfs:///flink/flink-checkpoints
state.savepoints.dir: hdfs:///flink/flink-savepoints
# Use ZooKeeper for HA coordination
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: node10:2181,node11:2181,node12:2181
high-availability.zookeeper.path.root: /flink
yarn.application-attempts: 2
fs.hdfs.hadoopconf: /etc/hadoop/conf
env.log.dir: /var/log/flink
[flink@node10 conf]$
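With this configuration in place, a job can be submitted to YARN either as a per-job cluster or through a long-running session. A sketch using Flink's bundled WordCount example; the flags match Flink releases of roughly the era this config suggests, and paths may differ on your install:

```shell
# Per-job mode: spin up a YARN application just for this job,
# with parallelism 2.
flink run -m yarn-cluster -p 2 ./examples/streaming/WordCount.jar

# Session mode: start a detached session with 1024 MB for the
# JobManager and each TaskManager, then submit jobs into it.
yarn-session.sh -jm 1024m -tm 1024m -d
flink run ./examples/streaming/WordCount.jar
```

With `high-availability: zookeeper` and `yarn.application-attempts: 2` set above, YARN restarts a failed JobManager once, and ZooKeeper lets the new attempt recover the job state from `high-availability.storageDir`.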

