Hadoop-3.1.3 High-Availability Cluster Deployment


1. Prerequisites

Operating user: hadoop

Working directory: /home/hadoop/apps

Machines (3): hadoop1, hadoop2, hadoop3

All three machines run CentOS 7 with JDK 1.8 installed and an already-running ZooKeeper-3.5.7 cluster.

The following steps create the user, set up passwordless SSH login, disable the firewall, and so on:

# Perform the following configuration on all 3 machines.

# Create the hadoop user
useradd hadoop
passwd hadoop
New password:
Retype new password:

# Grant sudo privileges: add a hadoop entry below the root entry
# Temporarily relax permissions so the file can be edited
chmod 777 /etc/sudoers
vim /etc/sudoers
## Allow root to run any commands anywhere
root   ALL=(ALL)   ALL
hadoop ALL=(ALL)   ALL
# Restore permissions
chmod 440 /etc/sudoers

# Set up passwordless SSH login
# Switch to the hadoop user's home directory
su - hadoop
ssh-keygen -t rsa    # press Enter four times
# This generates two files: id_rsa (private key) and id_rsa.pub (public key)
# Copy the public key to the machines you want to log in to without a password
ssh-copy-id hadoop2
ssh-copy-id hadoop3

# Map hostnames to IP addresses
sudo vim /etc/hosts
192.168.62.161 hadoop1
192.168.62.162 hadoop2
192.168.62.163 hadoop3

# Disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Disable SELinux
sudo vim /etc/selinux/config
SELINUX=enforcing --> SELINUX=disabled

# Configure time synchronization
# Install the ntpdate tool
sudo yum -y install ntp ntpdate
# Sync the system clock with a network time server
sudo ntpdate 0.asia.pool.ntp.org
# Write the system time to the hardware clock
sudo hwclock --systohc
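Before moving on, it is worth confirming that passwordless SSH actually works from each node. A minimal sanity-check sketch (not part of the original steps; run as the hadoop user, and BatchMode makes ssh fail instead of prompting for a password):

for host in hadoop1 hadoop2 hadoop3; do
  ssh -o BatchMode=yes hadoop@$host 'hostname; date' \
    || echo "passwordless SSH to $host is NOT working"
done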

2. Cluster Node Layout

In this cluster, both the HDFS NameNode and the YARN ResourceManager run in HA mode, which keeps the Hadoop cluster highly available.
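Reconstructed from the configuration in section 6 and the jps output in section 9.1, the planned layout is:

Node      HDFS                                    YARN                            ZooKeeper
hadoop1   NameNode, DataNode, JournalNode, ZKFC   ResourceManager, NodeManager    QuorumPeerMain
hadoop2   NameNode, DataNode, JournalNode, ZKFC   ResourceManager, NodeManager    QuorumPeerMain
hadoop3   DataNode, JournalNode                   NodeManager                     QuorumPeerMain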

3. Download

Run on hadoop1:

cd /home/hadoop/apps
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz

4. Extract

tar -zxvf hadoop-3.1.3.tar.gz

5. Set Environment Variables

# Add the environment variables
sudo vim /etc/profile
export HADOOP_HOME=/home/hadoop/apps/hadoop-3.1.3
export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
# Reload the configuration
source /etc/profile
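To confirm the variables took effect, the hadoop command should now resolve from any directory:

hadoop version    # should report Hadoop 3.1.3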

6. Edit the Configuration Files

# Directory containing the configuration files
cd /home/hadoop/apps/hadoop-3.1.3/etc/hadoop

6.1. Edit hadoop-env.sh

vim hadoop-env.sh

# Set Hadoop-specific environment variables here.
# Point to JAVA_HOME
export JAVA_HOME=/opt/jdk1.8.0_212
# Specify the user for each daemon; required since Hadoop 3.x (my user is named hadoop)
export HDFS_NAMENODE_USER=hadoop
export HDFS_DATANODE_USER=hadoop
export HDFS_ZKFC_USER=hadoop
export HDFS_JOURNALNODE_USER=hadoop
export YARN_RESOURCEMANAGER_USER=hadoop
export YARN_NODEMANAGER_USER=hadoop

6.2. Edit core-site.xml

vim core-site.xml

<configuration>
  <!-- Logical name of the HDFS nameservice -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <!-- Base directory for Hadoop's working data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/data/hadoop_tmp_data</value>
  </property>
  <!-- name truncated in the source: "hadoop." with value "hadoop" -->
  <!-- ZooKeeper quorum used for automatic failover -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
  <!-- Allow the hadoop user to proxy requests from any host and group -->
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
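Because fs.defaultFS points at the logical nameservice ns1 rather than a single host, clients keep working across a failover. For example, once the cluster is up (section 8):

# Paths without a scheme resolve against hdfs://ns1
hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -ls hdfs://ns1/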

6.3. Edit hdfs-site.xml

vim hdfs-site.xml

<configuration>
  <!-- Logical nameservice, matching fs.defaultFS in core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- The two NameNodes under ns1 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC addresses of the two NameNodes -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>hadoop1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>hadoop2:8020</value>
  </property>
  <!-- JournalNode quorum that stores the shared edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/ns1</value>
  </property>
  <!-- Proxy provider that clients use to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing: try sshfence first, then fall back to shell(/bin/true) -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Private key sshfence uses to log in to the other NameNode -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <!-- Local directory where each JournalNode stores its edits -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/data/journalnode_data</value>
  </property>
</configuration>
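Once the file is saved, hdfs getconf can verify that the HA settings are being read; it only parses the local configuration, so the cluster does not need to be running yet:

hdfs getconf -confKey dfs.nameservices   # expect: ns1
hdfs getconf -namenodes                  # expect: hadoop1 hadoop2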

6.4. Edit yarn-site.xml

vim yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
    <description>Enable RM high-availability</description>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn_cluster1</value>
    <description>Name of the cluster</description>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
    <description>The list of RM nodes in the cluster when HA is enabled</description>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop1</value>
    <description>The hostname of the rm1</description>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hadoop2</value>
    <description>The hostname of the rm2</description>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>hadoop1:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>hadoop2:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
    <description>Whether virtual memory limits will be enforced for containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>6</value>
    <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
  </property>
</configuration>
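After YARN is started (section 8), the active/standby state of the two ResourceManagers can be checked using the rm1/rm2 IDs configured above:

yarn rmadmin -getServiceState rm1   # e.g. active
yarn rmadmin -getServiceState rm2   # e.g. standby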

6.5. Edit the workers File

vim workers

hadoop1
hadoop2
hadoop3

6.6. Edit mapred-site.xml

vim mapred-site.xml

<configuration>
  <!-- Run MapReduce jobs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
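With the framework set to yarn, a quick smoke test once everything is running (section 8) is the bundled pi example; the jar path below is the standard location inside the 3.1.3 tarball, and if containers fail to find the MapReduce classes, mapred-site.xml may additionally need mapreduce.application.classpath set:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10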

7. Distribute the Directory

# Copy the configured hadoop-3.1.3 directory to hadoop2 and hadoop3
scp -r /home/hadoop/apps/hadoop-3.1.3 hadoop2:/home/hadoop/apps/
scp -r /home/hadoop/apps/hadoop-3.1.3 hadoop3:/home/hadoop/apps/

On hadoop2 and hadoop3, repeat step 5 to set the environment variables.

8. Start the Cluster

# Start the JournalNodes: run on each of hadoop1, hadoop2, hadoop3
hdfs --daemon start journalnode

# Format the NameNode: run on hadoop1 or hadoop2 (I ran it on hadoop1)
hdfs namenode -format

# Start the NameNode: run on hadoop1
hdfs --daemon start namenode

# On hadoop2, sync the NameNode metadata and then start it
hdfs namenode -bootstrapStandby
hdfs --daemon start namenode

# On hadoop1, format the HA state in ZooKeeper
hdfs zkfc -formatZK

# Start the remaining HDFS daemons (DataNodes and ZKFCs)
start-dfs.sh

# At this point the HDFS service is fully up. Start the YARN service with:
start-yarn.sh

# Restart HDFS
stop-dfs.sh
start-dfs.sh

# Restart YARN
stop-yarn.sh
start-yarn.sh

# Restart HDFS and YARN together
stop-all.sh
start-all.sh
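After startup, exactly one NameNode should report active. A minimal verification sketch using the nn1/nn2 IDs from section 6.3:

hdfs haadmin -getServiceState nn1   # one of nn1/nn2 should be "active"
hdfs haadmin -getServiceState nn2   # ...and the other "standby"
# Optional failover drill: stop the active NameNode, watch the standby
# take over, then bring the stopped one back.
hdfs --daemon stop namenode         # run on the active NameNode's host
hdfs haadmin -getServiceState nn2   # expect: active
hdfs --daemon start namenode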

9. Check Service Status

9.1. Check Processes

I wrote a jpsall.sh script to inspect the processes on all 3 machines at once; you can also run the jps command on each machine individually.

cat jpsall.sh

#!/bin/bash
# Run jps against each server to check the status of its daemons
for i in hadoop1 hadoop2 hadoop3
do
  echo "=====services on ${i}====="
  ssh hadoop@$i '/opt/jdk1.8.0_212/bin/jps'
done

sh jpsall.sh

=====services on 192.168.62.161=====
11616 JournalNode
11041 QuorumPeerMain
11797 DFSZKFailoverController
12296 NodeManager
12169 ResourceManager
11386 DataNode
11261 NameNode
29645 Jps
=====services on 192.168.62.162=====
50000 NameNode
50196 JournalNode
50471 ResourceManager
50296 DFSZKFailoverController
49897 QuorumPeerMain
50553 NodeManager
50076 DataNode
57758 Jps
=====services on 192.168.62.163=====
25748 JournalNode
25546 QuorumPeerMain
25643 DataNode
25867 NodeManager
29083 Jps

9.2. Check the Web UIs

Hadoop web UI: http://hadoop1:9870

YARN web UI: http://hadoop1:8088
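The NameNode also exposes its HA state over JMX on the same port, which is handy for scripting; the NameNodeStatus bean below is part of the standard NameNode JMX interface:

curl -s 'http://hadoop1:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
# The JSON response contains a "State" field: "active" or "standby"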
