- Hadoop MapReduce v2 Cookbook(Second Edition)
- Thilina Gunarathne
Benchmarking Hadoop MapReduce using TeraSort
Hadoop TeraSort is a well-known benchmark that aims to sort 1 TB of data as fast as possible using Hadoop MapReduce. The TeraSort benchmark stresses almost every part of the Hadoop MapReduce framework as well as the HDFS filesystem, making it an ideal choice for fine-tuning the configuration of a Hadoop cluster.
The original TeraSort benchmark sorts 10 billion 100-byte records, making the total data size 1 TB. However, we can specify the number of records, making it possible to configure the total size of the data.
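The record arithmetic above can be sanity-checked with a few lines of Python (a quick illustrative sketch, not part of the benchmark itself):

```python
# Each TeraSort record is exactly 100 bytes, so
# total data size = number of records * 100 bytes.
RECORD_SIZE = 100  # bytes, fixed by the TeraSort record format

def records_for(total_bytes):
    """Number of 100-byte records needed for a target data size."""
    return total_bytes // RECORD_SIZE

print(records_for(10**9))   # 10_000_000 records  -> 1 GB (the example below)
print(records_for(10**12))  # 10_000_000_000 records -> the full 1 TB benchmark
```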
Getting ready
You must set up and deploy HDFS and Hadoop v2 YARN MapReduce prior to running these benchmarks, and locate the hadoop-mapreduce-examples-*.jar file in your Hadoop installation.
How to do it...
The following steps will show you how to run the TeraSort benchmark on the Hadoop cluster:
- The first step of the TeraSort benchmark is data generation. You can use the teragen command to generate the input data for the TeraSort benchmark. The first parameter of teragen is the number of records and the second parameter is the HDFS directory in which to generate the data. The following command generates 1 GB of data, consisting of 10 million records, in the tera-in directory in HDFS. Change the location of the hadoop-mapreduce-examples-*.jar file in the following commands according to your Hadoop installation:

$ hadoop jar \
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
teragen 10000000 tera-in
Tip
It's a good idea to specify the number of Map tasks for the teragen computation to speed up the data generation. This can be done by specifying the -Dmapred.map.tasks parameter. Also, you can increase the HDFS block size for the generated data so that the Map tasks of the TeraSort computation will be coarser grained (the number of Map tasks for a Hadoop computation typically equals the number of input data blocks). This can be done by specifying the -Ddfs.block.size parameter.

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
teragen -Ddfs.block.size=536870912 \
-Dmapred.map.tasks=256 10000000 tera-in
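The effect of the block size on the Map task count can be estimated with a short sketch (this assumes the typical one-map-task-per-input-block behavior mentioned in the tip; the 128 MB default block size is a common HDFS configuration, not something set by the commands above):

```python
# Rough estimate of how many Map tasks TeraSort will launch:
# one Map task per HDFS input block.
def estimated_map_tasks(total_bytes, block_size_bytes):
    # Ceiling division: a final partial block still gets its own task.
    return -(-total_bytes // block_size_bytes)

data = 10_000_000 * 100          # 1 GB generated by the teragen example
default_block = 128 * 1024**2    # 128 MB, a common HDFS default
large_block = 536_870_912        # 512 MB, as set via -Ddfs.block.size

print(estimated_map_tasks(data, default_block))  # 8 Map tasks
print(estimated_map_tasks(data, large_block))    # 2 Map tasks
```

Fewer, larger blocks mean fewer, coarser-grained Map tasks, which reduces per-task scheduling overhead for the subsequent sort.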
- The second step of the TeraSort benchmark is the execution of the TeraSort MapReduce computation on the data generated in step 1, using the following command. The first parameter of the terasort command is the HDFS input data directory, and the second parameter is the HDFS output data directory.

$ hadoop jar \
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
terasort tera-in tera-out
Tip
It's a good idea to specify the number of Reduce tasks for the TeraSort computation to speed up the Reducer part of the computation. This can be done by specifying the -Dmapred.reduce.tasks parameter as follows:

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
terasort -Dmapred.reduce.tasks=32 tera-in tera-out
- The last step of the TeraSort benchmark is the validation of the results. This can be done using the teravalidate application as follows. The first parameter is the directory containing the sorted data and the second parameter is the directory in which to store the report containing the results.

$ hadoop jar \
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
teravalidate tera-out tera-validate
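The essence of what teravalidate checks can be sketched in a few lines: every key in the concatenated reducer outputs must be greater than or equal to the previous one, including across file boundaries. This is a simplified illustration only; the real TeraValidate also verifies record checksums.

```python
# Simplified version of teravalidate's core check: confirm that a
# sequence of keys (the concatenated reducer outputs) is globally sorted.
def is_globally_sorted(keys):
    previous = None
    for key in keys:
        if previous is not None and key < previous:
            return False  # out-of-order key found
        previous = key
    return True

print(is_globally_sorted([b"aa", b"ab", b"ba"]))  # True
print(is_globally_sorted([b"ab", b"aa"]))         # False
```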
How it works...
TeraSort uses the sorting capability of the MapReduce framework together with a custom range Partitioner that divides the Map output among the Reduce tasks, ensuring a globally sorted order.
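The idea behind range partitioning can be sketched as follows. This is an illustrative simplification, not TeraSort's actual Java implementation: TeraSort samples the input to choose the split points, whereas here they are supplied directly.

```python
import bisect

def make_range_partitioner(split_points):
    """Build a partitioner for len(split_points) + 1 reducers.

    split_points is a sorted list of boundary keys; each key is routed
    to the reducer whose key range contains it, so concatenating the
    outputs of reducers 0..n-1 yields a globally sorted result.
    """
    def partition(key):
        return bisect.bisect_right(split_points, key)
    return partition

# Two split points -> three reducers covering [..g], (g..p], (p..].
partition = make_range_partitioner([b"g", b"p"])
print(partition(b"apple"))   # 0
print(partition(b"hadoop"))  # 1
print(partition(b"sort"))    # 2
```

Because each reducer sorts its own partition and the partitions themselves are ordered, no cross-reducer merge is needed afterwards.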