
Time for action – WordCount the easy way

Let's revisit WordCount, but this time use some of these predefined map and reduce implementations:

  1. Create a new WordCountPredefined.java file containing the following code:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
    import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;
    
    public class WordCountPredefined
    {   
        public static void main(String[] args) throws Exception
        {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "word count1");
            job.setJarByClass(WordCountPredefined.class);
            job.setMapperClass(TokenCounterMapper.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
  2. Now compile, create the JAR file, and run it as before.
  3. Don't forget to delete the output directory before running the job if you want to reuse the same location; for example, use hadoop fs -rmr output.

What just happened?

Given the ubiquity of WordCount as an example in the MapReduce world, it's perhaps not entirely surprising that there are predefined Mapper and Reducer implementations that together realize the entire WordCount solution. The TokenCounterMapper class breaks each input line into a series of (token, 1) pairs, and the IntSumReducer class produces a final count by summing the values received for each key.
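To make this concrete, here is a minimal sketch (not the actual Hadoop source) of what the two classes do conceptually. The Sketch class names are ours; the real IntSumReducer is generic in its key type, whereas this sketch is specialized to Text keys. In practice each class would live in its own source file:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    
    // Sketch of TokenCounterMapper: emit a (token, 1) pair for every
    // whitespace-delimited token in each input line
    class TokenCounterMapperSketch
        extends Mapper<Object, Text, Text, IntWritable>
    {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();
    
        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException
        {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens())
            {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }
    
    // Sketch of IntSumReducer (specialized to Text keys): sum the integer
    // values received for each key and emit the total
    class IntSumReducerSketch
        extends Reducer<Text, IntWritable, Text, IntWritable>
    {
        private final IntWritable result = new IntWritable();
    
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException
        {
            int sum = 0;
            for (IntWritable val : values)
            {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

Because the mapper emits the value 1 for every token, summing the values for a key in the reducer yields the number of occurrences of that token.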

There are two important things to appreciate here:

  • Though WordCount was doubtless an inspiration for these implementations, they are in no way specific to it and are widely applicable
  • Keep this model of reusable mapper and reducer implementations in mind, especially since the best starting point for a new MapReduce job is often an existing implementation