
Summary

In this chapter, you saw optimizations at different stages of the Hadoop MapReduce pipeline. The join example also illustrated a few of the advanced features available to MapReduce jobs. Some key takeaways from this chapter are as follows:

  • Too many I/O-bound Map tasks should be avoided; the inputs (input splits) dictate the number of Map tasks.
  • Map tasks are the primary contributors to job speedup because they execute in parallel.
  • Combiners not only make data transfers between Map and Reduce tasks more efficient, but also reduce disk I/O on the Map side (see the driver sketch after this list).
  • The default setting is a single Reduce task.
  • Custom partitioners can be used for load balancing among Reducers.
  • DistributedCache is useful for distributing small side files to tasks. Avoid placing too many files, or very large files, in the cache.
  • Custom counters should be used to track global, job-level statistics, but defining too many counters adds overhead.
  • Compression should be used liberally. Different compression codecs have different tradeoffs, and the right choice is application-dependent.
  • Hadoop has many tunable configuration knobs to optimize job execution.
  • Premature optimizations should be avoided. Built-in counters are your friends.
  • Higher-level abstractions such as Pig or Hive are recommended over writing low-level MapReduce jobs by hand.
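
To tie a few of these takeaways together, here is a minimal, hypothetical driver sketch (not the chapter's join example) showing how a simple word-count-style job might wire in a Combiner, a custom Partitioner with an explicit Reduce task count, and compression of both the intermediate map output and the final output. The class names (TuningSketchDriver, TokenMapper, SumReducer, FirstCharPartitioner), the choice of SnappyCodec, and the reducer count of 4 are illustrative assumptions; only the standard Hadoop MapReduce (MRv2) API calls and configuration property names are taken as given.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TuningSketchDriver {

    // Tokenizes each input line and emits (word, 1).
    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts for a key.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    // Hypothetical custom partitioner: assigns keys to reducers by their
    // first character, e.g. to spread a known skew in the key space.
    public static class FirstCharPartitioner
            extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            if (key.getLength() == 0) {
                return 0;
            }
            return key.charAt(0) % numPartitions;
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Compress intermediate map output to cut shuffle traffic and map-side
        // spill I/O (MRv2 property names; Snappy assumed available on the cluster).
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setClass("mapreduce.map.output.compress.codec",
                SnappyCodec.class, CompressionCodec.class);

        Job job = Job.getInstance(conf, "tuning-sketch");
        job.setJarByClass(TuningSketchDriver.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);

        // Custom partitioner plus an explicit reducer count
        // (the default is a single Reduce task).
        job.setPartitionerClass(FirstCharPartitioner.class);
        job.setNumReduceTasks(4);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Compress the final job output as well.
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that reusing the reducer as the combiner is only safe here because summation is commutative and associative; the same shortcut does not apply to every reduce function.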

In the next chapter, we will look at Pig, a framework to script MapReduce jobs on Hadoop. Pig provides higher-level relational operators that a user can employ to do data transformations, eliminating the need to write low-level MapReduce Java code.
