Hadoop: A WordCount Example
Recently, quite a few readers who want to get into big data have messaged me asking that I, having spent years in the field, write some articles on big data analysis, to serve as a reference for going from getting started to proficiency. As an old hand at big data analysis, I am glad my work has earned the recognition of many peers in the industry, and I would also like to share the experience and lessons I have accumulated over the years. So today I bring you a classic Hadoop starter: the WordCount example.
I. Preparation
1. Installing Hadoop
(1) Pseudo-distributed installation
See the post 《Hadoop之——Hadoop2.4.1伪分布搭建》.
(2) Cluster installation
See the post 《Hadoop之——CentOS + hadoop2.5.2分布式环境配置》.
(3) High-availability cluster installation
See the posts 《Hadoop之——Hadoop2.5.2 HA高可靠性集群搭建(Hadoop+Zookeeper)前期准备》 and 《Hadoop之——Hadoop2.5.2 HA高可靠性集群搭建(Hadoop+Zookeeper)》.
2. Configuring Eclipse
All of the code in this example is developed and run in Eclipse. To run this example (and the Hadoop examples that follow) directly in Eclipse, configure your environment as described in the post 《Hadoop之——windows7+eclipse+hadoop2.5.2环境配置》.
II. Program Development
1. The WCMapper class, which counts word occurrences. (The generic parameters and the map method body were cut off in the original; they are reconstructed below from the class comments and the imports.)
package com.lyz.hdfs.mr.worldcount;

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Mapper for counting words.
 * KEYIN: the map input key, the byte offset of the current line within the input
 * VALUEIN: the current line of text
 * KEYOUT: a single word
 * VALUEOUT: the count emitted per occurrence, always 1 in this example
 * @author liuyazhuang
 */
public class WCMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    // Note: the original post's method body was truncated; this is a
    // reconstruction consistent with the class comments and the StringUtils import.
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the current line into words on spaces.
        String line = value.toString();
        String[] words = StringUtils.split(line, " ");
        // Emit (word, 1) for every word in the line.
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}
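To sanity-check the mapper without submitting a job, you can drive it with a unit test. This is a minimal sketch, assuming the Apache MRUnit and JUnit libraries have been added to the project (neither is used in the original post):

package com.lyz.hdfs.mr.worldcount;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Test;

public class WCMapperTest {

    @Test
    public void mapEmitsOneForEveryWord() throws Exception {
        // Feed the mapper one line and assert the (word, 1) pairs it emits, in order.
        MapDriver.newMapDriver(new WCMapper())
                .withInput(new LongWritable(0), new Text("liuyazhuang ubdh liuyazhuang"))
                .withOutput(new Text("liuyazhuang"), new LongWritable(1))
                .withOutput(new Text("ubdh"), new LongWritable(1))
                .withOutput(new Text("liuyazhuang"), new LongWritable(1))
                .runTest();
    }
}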
2. The WCReducer class, which totals the counts for each word. (As with the mapper, the truncated class body is reconstructed from the comments and imports.)
package com.lyz.hdfs.mr.worldcount;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * Reducer for counting words.
 * KEYIN: a single word
 * VALUEIN: the per-occurrence counts emitted by the map phase
 * KEYOUT: a single word
 * VALUEOUT: the total number of times the word occurs
 * @author liuyazhuang
 */
public class WCReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    // Reconstructed body: sum the counts for each word and emit the total.
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        context.write(key, new LongWritable(count));
    }
}
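The reducer can be exercised the same way; again a sketch that assumes MRUnit and JUnit are on the classpath:

package com.lyz.hdfs.mr.worldcount;

import java.util.Arrays;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Test;

public class WCReducerTest {

    @Test
    public void reduceSumsThePerWordCounts() throws Exception {
        // Four 1s for "liuyazhuang" should come out as a single total of 4,
        // matching the part-r-00000 result shown in the appendix below.
        ReduceDriver.newReduceDriver(new WCReducer())
                .withInput(new Text("liuyazhuang"),
                        Arrays.asList(new LongWritable(1), new LongWritable(1),
                                new LongWritable(1), new LongWritable(1)))
                .withOutput(new Text("liuyazhuang"), new LongWritable(4))
                .runTest();
    }
}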
3. The WCRunner class, the entry point that configures and submits the job:
package com.lyz.hdfs.mr.worldcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * Runs the word-count MapReduce job.
 * @author liuyazhuang
 */
public class WCRunner extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        ToolRunner.run(new Configuration(), new WCRunner(), args);
    }

    @Override
    public int run(String[] arg0) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(WCRunner.class);

        // Wire up the mapper and reducer.
        job.setMapperClass(WCMapper.class);
        job.setReducerClass(WCReducer.class);

        // Declare the intermediate (map output) and final key/value types.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // Input file and output directory; the output directory must not already exist.
        FileInputFormat.setInputPaths(job, new Path("D:/hadoop_data/wordcount/src.txt"));
        FileOutputFormat.setOutputPath(job, new Path("D:/hadoop_data/wordcount/dest"));

        return job.waitForCompletion(true) ? 0 : 1;
    }
}
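One detail worth noting before running: run() builds a brand-new Configuration instead of reusing the one that ToolRunner parsed the command line into, which is why the console output below warns that "Hadoop command-line option parsing not performed". (The "No job jar file set" warning is likewise expected when running straight from Eclipse class files.) If you want generic options such as -D to take effect, a minimal variant of run() (my suggestion, not from the original post) would reuse the parsed configuration:

// Sketch: reuse the Configuration that ToolRunner populated from the
// command line (available via Configured.getConf()) instead of a new one.
@Override
public int run(String[] arg0) throws Exception {
    Configuration conf = getConf(); // parsed by ToolRunner/GenericOptionsParser
    Job job = Job.getInstance(conf);
    // ... the remaining job setup is identical to WCRunner above ...
    return job.waitForCompletion(true) ? 0 : 1;
}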
III. Running the Program
In Eclipse, right-click the WCRunner class and choose Run As -> Java Application. The console output is as follows:
2017-10-14 23:52:51,865 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2017-10-14 23:52:51,868 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2017-10-14 23:52:52,665 WARN [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2017-10-14 23:52:52,669 WARN [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(259)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2017-10-14 23:52:52,675 INFO [main] input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 1
2017-10-14 23:52:52,713 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
2017-10-14 23:52:52,788 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local994420281_0001
2017-10-14 23:52:52,820 WARN [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-liuyazhuang/mapred/staging/liuyazhuang994420281/.staging/job_local994420281_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2017-10-14 23:52:52,822 WARN [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-liuyazhuang/mapred/staging/liuyazhuang994420281/.staging/job_local994420281_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2017-10-14 23:52:52,908 WARN [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-liuyazhuang/mapred/local/localRunner/liuyazhuang/job_local994420281_0001/job_local994420281_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2017-10-14 23:52:52,909 WARN [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-liuyazhuang/mapred/local/localRunner/liuyazhuang/job_local994420281_0001/job_local994420281_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2017-10-14 23:52:52,913 INFO [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job:
2017-10-14 23:52:52,914 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_local994420281_0001
2017-10-14 23:52:52,915 INFO [Thread-2] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2017-10-14 23:52:52,921 INFO [Thread-2] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2017-10-14 23:52:52,956 INFO [Thread-2] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2017-10-14 23:52:52,956 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local994420281_0001_m_000000_0
2017-10-14 23:52:52,982 INFO [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
2017-10-14 23:52:53,048 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@50b84eb3
2017-10-14 23:52:53,051 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(733)) - Processing split: file:/D:/hadoop_data/wordcount/src.txt:0+173
2017-10-14 23:52:53,060 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(388)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2017-10-14 23:52:53,089 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1182)) - (EQUATOR) 0 kvi 26214396(104857584)
2017-10-14 23:52:53,089 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(975)) - mapreduce.task.io.sort.mb: 100
2017-10-14 23:52:53,089 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(976)) - soft limit at 83886080
2017-10-14 23:52:53,089 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(977)) - bufstart = 0; bufvoid = 104857600
2017-10-14 23:52:53,089 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(978)) - kvstart = 26214396; length = 6553600
2017-10-14 23:52:53,096 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) -
2017-10-14 23:52:53,097 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1437)) - Starting flush of map output
2017-10-14 23:52:53,097 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1455)) - Spilling map output
2017-10-14 23:52:53,097 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1456)) - bufstart = 0; bufend = 326; bufvoid = 104857600
2017-10-14 23:52:53,097 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1458)) - kvstart = 26214396(104857584); kvend = 26214320(104857280); length = 77/6553600
2017-10-14 23:52:53,114 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1641)) - Finished spill 0
2017-10-14 23:52:53,122 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1001)) - Task:attempt_local994420281_0001_m_000000_0 is done. And is in the process of committing
2017-10-14 23:52:53,129 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
2017-10-14 23:52:53,129 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local994420281_0001_m_000000_0' done.
2017-10-14 23:52:53,129 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local994420281_0001_m_000000_0
2017-10-14 23:52:53,129 INFO [Thread-2] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2017-10-14 23:52:53,132 INFO [Thread-2] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
2017-10-14 23:52:53,132 INFO [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local994420281_0001_r_000000_0
2017-10-14 23:52:53,138 INFO [pool-3-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
2017-10-14 23:52:53,178 INFO [pool-3-thread-1] mapred.Task (Task.java:initialize(587)) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1c3a583d
2017-10-14 23:52:53,182 INFO [pool-3-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@4a7f319c
2017-10-14 23:52:53,192 INFO [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:
IV. Appendix
The input file used in this example is src.txt, with the following contents:
dysdgy ubdh shdh
ssusdfy sdusf duyfu
fuyfuyfys sydfyusd sydufyus
dhfdf fyudyfu dyuefyue
dfhusf fyueyf dyiefyu sudiufi
liuyazhuang
liuyazhuang
liuyazhuang
liuyazhuang
The result file, part-r-00000, contains:
dfhusf 1
dhfdf 1
duyfu 1
dyiefyu 1
dysdgy 1
dyuefyue 1
fuyfuyfys 1
fyudyfu 1
fyueyf 1
liuyazhuang 4
sdusf 1
shdh 1
ssusdfy 1
sudiufi 1
sydfyusd 1
sydufyus 1
ubdh 1
With that, the Hadoop word-count MapReduce program is complete.
V. Tips
If you run into problems during development, the other posts in the Hadoop column may help you resolve them.