2022-11-15
Summary of Hadoop HDFS Commands
1. List all directories and files under the root directory
hadoop fs -ls /
2. List all directories and files under /logs
hadoop fs -ls /logs
3. Recursively list all files under /user and its subdirectories (use with caution)
hadoop fs -ls -R /user
4. Create the /soft directory
hadoop fs -mkdir /soft
5. Create nested directories in one step
hadoop fs -mkdir -p /apps/windows/2017/01/01
6. Upload the local wordcount.jar file to the /wordcount directory
hadoop fs -put wordcount.jar /wordcount
7. Download the words.txt file to the local working directory
hadoop fs -get /words.txt
8. Copy the /stu/students.txt file to the local filesystem
hadoop fs -copyToLocal /stu/students.txt
9. Copy the local word.txt file to the /wordcount/input/ directory
hadoop fs -copyFromLocal word.txt /wordcount/input
10. Move the local word.txt file to the /wordcount/input/ directory (the local copy is removed)
hadoop fs -moveFromLocal word.txt /wordcount/input/
11. Copy /stu/students.txt to /stu/students.txt.bak
hadoop fs -cp /stu/students.txt /stu/students.txt.bak
12. Copy the subdirectories and files under /flume/tailout/ into the /logs directory (created if it does not exist)
hadoop fs -cp /flume/tailout/ /logs
13. Rename /word.txt to /words.txt
hadoop fs -mv /word.txt /words.txt
14. Move /words.txt into the /wordcount/input/ directory
hadoop fs -mv /words.txt /wordcount/input/
15. Delete the /ws directory together with all its subdirectories and files (use with caution)
hadoop fs -rm -r /ws
16. Delete directories whose names start with "xbs-", including their subdirectories
hadoop fs -rm -r /xbs-*
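One caveat with the glob in item 16: if the pattern happens to match paths on the local filesystem, the local shell expands it before hadoop ever sees it; quoting the pattern (`hadoop fs -rm -r '/xbs-*'`) guarantees that HDFS itself does the matching. A minimal local-shell sketch of the difference (plain `echo`, no Hadoop needed):

```shell
# Local demo of shell glob expansion vs. a quoted pattern (no HDFS involved).
# Unquoted, the pattern is rewritten by the shell when local files match it;
# quoted, the command (here echo) receives the literal pattern.
d=$(mktemp -d)
cd "$d"
touch xbs-a xbs-b
echo xbs-*      # expanded by the local shell before the command runs
echo 'xbs-*'    # quoted: the command sees the literal pattern
```

The first `echo` prints the matched file names; the second prints the pattern itself.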
17. Delete the a.txt file under /wordcount/output2/
hadoop fs -rm /wordcount/output2/a.txt
18. Delete all files under /wordcount/input/
hadoop fs -rm /wordcount/input/*
19. Check disk usage across the HDFS cluster
hadoop fs -df -h
20. View the contents of /word.txt
hadoop fs -cat /word.txt
21. Append the contents of the local name.txt file to /wordcount/input/words.txt
hadoop fs -appendToFile name.txt /wordcount/input/words.txt
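A related form worth knowing (per the FileSystem shell documentation; verify with `hadoop fs -help appendToFile` on your version): passing `-` as the local source makes `appendToFile` read from standard input, so generated text can be appended without a temporary local file. A sketch that skips itself on machines without a Hadoop client on the PATH:

```shell
# Sketch: append stdin to an HDFS file; exits quietly where hadoop is unavailable.
if command -v hadoop >/dev/null 2>&1; then
  # '-' as the source tells appendToFile to read from standard input
  echo "one more line" | hadoop fs -appendToFile - /wordcount/input/words.txt
else
  echo "hadoop not found; skipping"
fi
```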
22. Follow the contents of /wordcount/input/words.txt as new data is appended
hadoop fs -tail -f /wordcount/input/words.txt
23. Report the total size of the /flume directory
hadoop fs -du -s -h /flume
24. Report the size of each subdirectory (or file) under /flume individually
hadoop fs -du -s -h /flume/*
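The two `-du` forms differ only in where the summarizing happens, in the same way as the Unix `du` they are modeled on: `-s` against the directory prints one total, while `-s` against a glob prints one total per matched child. A local-filesystem analogue (coreutils `du`, not Hadoop) shows the shape of the output:

```shell
# Local analogue of the two forms above, using plain du (not hadoop fs -du).
d=$(mktemp -d)
mkdir -p "$d/a" "$d/b"
printf 'xxxx' > "$d/a/f1"
printf 'yyyyyyyy' > "$d/b/f2"
du -s "$d"        # one summary line for the whole directory
du -s "$d"/*      # one summary line per child (the glob is expanded first)
```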
25. Run a program from a jar package
# hadoop jar <jar to run> <main class> <input dir> <output dir>
hadoop jar wordcount.jar com.xuebusi.hadoop.mr.WordCountDriver /wordcount/input /wordcount/out
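Putting several of the commands above together, a typical run looks like the sketch below. It assumes wordcount.jar and a local words.txt as in the earlier items, that /wordcount/out does not already exist (MapReduce refuses to overwrite its output directory), and that the result lands in the conventional reducer output file part-r-00000; the guard lets the script fall through cleanly on machines without a Hadoop client:

```shell
# Sketch: end-to-end WordCount run combining the commands summarized above.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -mkdir -p /wordcount/input               # item 5: create the input directory
  hadoop fs -put words.txt /wordcount/input          # item 6: upload the input file
  hadoop jar wordcount.jar com.xuebusi.hadoop.mr.WordCountDriver \
    /wordcount/input /wordcount/out                  # item 25: run the job
  hadoop fs -cat /wordcount/out/part-r-00000         # item 20: read the result (assumed default output name)
else
  echo "hadoop not found; skipping"
fi
```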
26. Check the status of the HDFS cluster
hdfs dfsadmin -report
[root@hadoop03 apps]# hdfs dfsadmin -report
Configured Capacity: 55737004032 (51.91 GB)
Present Capacity: 15066578944 (14.03 GB)
DFS Remaining: 14682021888 (13.67 GB)
DFS Used: 384557056 (366.74 MB)
DFS Used%: 2.55%
Under replicated blocks: 7
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (3):

Name: 192.168.71.11:50010 (hadoop01)
Hostname: hadoop01
Decommission Status : Normal
Configured Capacity: 18579001344 (17.30 GB)
DFS Used: 128180224 (122.24 MB)
Non DFS Used: 16187543552 (15.08 GB)
DFS Remaining: 2263277568 (2.11 GB)
DFS Used%: 0.69%
DFS Remaining%: 12.18%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 09 11:17:43 PST 2017

Name: 192.168.71.13:50010 (hadoop03)
Hostname: hadoop03
Decommission Status : Normal
Configured Capacity: 18579001344 (17.30 GB)
DFS Used: 128196608 (122.26 MB)
Non DFS Used: 13623074816 (12.69 GB)
DFS Remaining: 4827729920 (4.50 GB)
DFS Used%: 0.69%
DFS Remaining%: 25.98%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 09 11:17:41 PST 2017

Name: 192.168.71.12:50010 (hadoop02)
Hostname: hadoop02
Decommission Status : Normal
Configured Capacity: 18579001344 (17.30 GB)
DFS Used: 128180224 (122.24 MB)
Non DFS Used: 10859806720 (10.11 GB)
DFS Remaining: 7591014400 (7.07 GB)
DFS Used%: 0.69%
DFS Remaining%: 40.86%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 09 11:17:42 PST 2017
27. View usage help for the hadoop fs command (running it with no arguments prints the usage summary)
[root@hadoop01 hadoop]# hadoop fs
Usage: hadoop fs [generic options] [-appendToFile