- Mastering Hadoop
- Sandeep Karanth
Summary
In this chapter, you saw optimizations at different stages of the Hadoop MapReduce pipeline. With the join example, we saw a few other advanced features available for MapReduce jobs. Some key takeaways from this chapter are as follows:
- The number of Map tasks is dictated by the input splits; spawning too many I/O-bound Map tasks should be avoided.
- Map tasks are the primary contributors to job speedup, owing to their parallelism.
- Combiners not only reduce data transfer between Map tasks and Reduce tasks, but also cut disk I/O on the Map side.
- The default setting is a single Reduce task.
- Custom partitioners can be used for load balancing among Reducers.
- DistributedCache is useful for distributing small side files to tasks. Placing too many files, or very large files, in the cache should be avoided.
- Custom counters should be used to track global, job-level statistics, but defining too many counters is costly.
- Compression should be used wherever practical. Different compression codecs have different tradeoffs, and the right choice is application-dependent.
- Hadoop has many tunable configuration knobs to optimize job execution.
- Premature optimizations should be avoided. Built-in counters are your friends.
- Higher-level abstractions such as Pig or Hive are recommended over hand-written, bare-metal Hadoop jobs.
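The combiner takeaway above can be illustrated with a plain-Java word-count sketch (no Hadoop dependencies; the class and method names here are hypothetical, for illustration only): a combiner applies reduce-style aggregation to Map output locally, so each distinct key is spilled and shuffled once per Map task rather than once per occurrence.

```java
import java.util.HashMap;
import java.util.Map;

public class CombinerSketch {
    // Without a combiner, the Map phase emits one (word, 1) pair per
    // occurrence. With a combiner, pairs are summed locally before the
    // shuffle, shrinking both the spill to disk and the network transfer.
    static Map<String, Integer> combine(String[] mapOutputWords) {
        Map<String, Integer> combined = new HashMap<>();
        for (String word : mapOutputWords) {
            combined.merge(word, 1, Integer::sum); // local, reduce-like sum
        }
        return combined;
    }

    public static void main(String[] args) {
        String[] words = {"to", "be", "or", "not", "to", "be"};
        Map<String, Integer> combined = combine(words);
        // Six raw (word, 1) pairs collapse to four combined pairs.
        System.out.println(words.length + " -> " + combined.size());
    }
}
```

In a real job, the same effect comes from calling `job.setCombinerClass(...)` in the driver, which is valid only when the Reduce function is commutative and associative (as summing is).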
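The custom-partitioner takeaway can also be sketched in plain Java (again with no Hadoop dependencies, and a hypothetical "hot key" scenario). Hadoop's default `HashPartitioner` assigns a record to a Reducer as `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`; a custom partitioner replaces that function, for example to give a known skewed key a dedicated Reducer:

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionSketch {
    // Same formula as Hadoop's default HashPartitioner: mask off the sign
    // bit, then take the remainder over the number of Reduce tasks.
    static int defaultPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    // Hypothetical custom partitioner: route a known hot key to its own
    // Reducer so it cannot skew one of the shared hash buckets.
    static int customPartition(String key, int numReduceTasks) {
        if (key.equals("hot-key")) {
            return numReduceTasks - 1; // dedicated Reducer for the hot key
        }
        return (key.hashCode() & Integer.MAX_VALUE) % (numReduceTasks - 1);
    }

    public static void main(String[] args) {
        String[] keys = {"alpha", "beta", "hot-key", "gamma", "hot-key"};
        Map<Integer, Integer> load = new HashMap<>();
        for (String k : keys) {
            load.merge(customPartition(k, 4), 1, Integer::sum);
        }
        // All "hot-key" records land on the dedicated last Reducer.
        System.out.println(load.get(3));
    }
}
```

In a real job this logic would live in a subclass of `org.apache.hadoop.mapreduce.Partitioner`, registered with `job.setPartitionerClass(...)`.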
In the next chapter, we will look at Pig, a framework to script MapReduce jobs on Hadoop. Pig provides higher-level relational operators that a user can employ to do data transformations, eliminating the need to write low-level MapReduce Java code.