Hadoop: Starts Normally but Will Not Shut Down Cleanly

Reader submission · 2022-11-20

On a cluster with one master and two slave nodes, Hadoop formats the NameNode without any problem:

hadoop@hadoop1:~/hadoop/conf$ hadoop namenode -format

13/10/21 12:02:15 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:   host = hadoop1/192.168.1.148

STARTUP_MSG:   args = [-format]

STARTUP_MSG:   version = 1.2.1

STARTUP_MSG:   build = -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013

STARTUP_MSG:   java = 1.6.0_43

************************************************************/

13/10/21 12:02:15 INFO util.GSet: Computing capacity for map BlocksMap

13/10/21 12:02:15 INFO util.GSet: VM type       = 64-bit

13/10/21 12:02:15 INFO util.GSet: 2.0% max memory = 932118528

13/10/21 12:02:15 INFO util.GSet: capacity      = 2^21 = 2097152 entries

13/10/21 12:02:15 INFO util.GSet: recommended=2097152, actual=2097152

13/10/21 12:02:15 INFO namenode.FSNamesystem: fsOwner=hadoop

13/10/21 12:02:15 INFO namenode.FSNamesystem: supergroup=supergroup

13/10/21 12:02:15 INFO namenode.FSNamesystem: isPermissionEnabled=true

13/10/21 12:02:15 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100

13/10/21 12:02:15 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)

13/10/21 12:02:15 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0

13/10/21 12:02:15 INFO namenode.NameNode: Caching file names occuring more than 10 times

13/10/21 12:02:15 INFO common.Storage: Image file /home/hadoop/hdfs_tmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.

13/10/21 12:02:15 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/hadoop/hdfs_tmp/dfs/name/current/edits

13/10/21 12:02:15 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/hadoop/hdfs_tmp/dfs/name/current/edits

13/10/21 12:02:16 INFO common.Storage: Storage directory /home/hadoop/hdfs_tmp/dfs/name has been successfully formatted.

13/10/21 12:02:16 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.1.148

************************************************************/

The daemons also start normally:

hadoop@hadoop1:~/hbase/logs$ start-all.sh

starting namenode, logging to /home/hadoop/hadoop/libexec/../logs/hadoop-hadoop-namenode-hadoop1.out

hadoop3: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop3.out

hadoop2: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop2.out

hadoop1: starting secondarynamenode, logging to /home/hadoop/hadoop/libexec/../logs/hadoop-hadoop-secondarynamenode-hadoop1.out

starting jobtracker, logging to /home/hadoop/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-hadoop1.out

hadoop3: starting tasktracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-tasktracker-hadoop3.out

hadoop2: starting tasktracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-tasktracker-hadoop2.out

But they cannot be stopped cleanly:

hadoop@hadoop1:~$ stop-all.sh

stopping jobtracker

hadoop3: stopping tasktracker

hadoop2: stopping tasktracker

stopping namenode

stopping namenode

hadoop3: no datanode to stop

hadoop2: no datanode to stop

hadoop1: stopping secondarynamenode

Running jps on each node showed that the slave nodes hadoop2 and hadoop3 had no DataNode process at all, which is why stop-all.sh reported "no datanode to stop".
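The jps check can be scripted. The helper below is a sketch: the daemon names are the Hadoop 1.x worker daemons from the startup log above, and `check_daemons` is a hypothetical name, not part of Hadoop. It reports which expected worker daemons are absent from a node's jps output (feeding it the output of `jps` run over ssh on each slave is left to the reader):

```shell
#!/bin/sh
# check_daemons: given the text of `jps` output from a worker node,
# print any expected Hadoop 1.x worker daemon that is missing.
check_daemons() {
  jps_out="$1"
  for daemon in DataNode TaskTracker; do
    if ! printf '%s\n' "$jps_out" | grep -q "$daemon"; then
      echo "missing: $daemon"
    fi
  done
}

# Example: jps output resembling the broken cluster above, DataNode absent.
check_daemons "2345 TaskTracker
3456 Jps"
# prints "missing: DataNode"
```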

The cluster originally ran hadoop 1.1.2 with hbase 0.94.8 and worked fine. It was then upgraded to hadoop 1.2.1 and hbase 0.94.12: only the installation packages were replaced, and the configuration under conf was kept essentially identical, yet this problem appeared.

After some digging, the problem turned out to lie in dfs.name.dir. Before reformatting the distributed filesystem, the path configured as dfs.name.dir on the NameNode, along with the corresponding storage directories on the DataNodes, must be deleted or moved aside; otherwise Hadoop will not work properly. According to one reference, this is a safeguard against a format silently destroying existing, still-useful HDFS data, so the dfs.name.dir path is expected not to exist before formatting.
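The likely mechanism, common on Hadoop 1.x and stated here as an assumption about this cluster: `hadoop namenode -format` assigns the NameNode a fresh namespaceID, while each DataNode still carries the old ID in its storage directory's `current/VERSION` file and refuses to start against the mismatched NameNode. The two IDs can be compared with a small helper (`get_namespace_id` is a hypothetical name; the path layout is HDFS 1.x's):

```shell
#!/bin/sh
# get_namespace_id: print the namespaceID recorded in an HDFS 1.x
# storage directory (dfs/name on the NameNode, dfs/data on a DataNode).
get_namespace_id() {
  grep '^namespaceID=' "$1/current/VERSION" | cut -d= -f2
}

# On this cluster the comparison would look like (paths from the log above):
#   get_namespace_id /home/hadoop/hdfs_tmp/dfs/name    # on the NameNode
#   get_namespace_id /home/hadoop/hdfs_tmp/dfs/data    # on each slave
# If the two numbers differ, the DataNode will not come up.
```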

After deleting the hdfs_tmp directory at /home/hadoop/hdfs_tmp/ on all three nodes and reformatting, the cluster finally ran normally.
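The fix can be scripted. This is only a sketch for this cluster's layout: the hdfs_tmp path and host names come from the article, `wipe_storage_dir` is a hypothetical helper, and passwordless ssh between nodes is assumed:

```shell
#!/bin/sh
# wipe_storage_dir: remove an HDFS storage directory if it exists, so a
# subsequent `hadoop namenode -format` starts from a clean slate.
wipe_storage_dir() {
  [ -d "$1" ] && rm -rf "$1"
  return 0
}

# On a real 1.x cluster this would run on every node before reformatting:
#   for host in hadoop1 hadoop2 hadoop3; do
#     ssh "$host" 'rm -rf /home/hadoop/hdfs_tmp'
#   done
#   hadoop namenode -format
#   start-all.sh
```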
