
Summary

In this chapter, we started by checking the prerequisites for installing Hadoop and then configured Hadoop in pseudo-distributed mode. Next, we got the Elasticsearch server up and running and covered the basic configuration of Elasticsearch. We learned how to install the Elasticsearch plugins. We imported the sample file for the WordCount example into HDFS and successfully ran our first Hadoop MapReduce job that uses ES-Hadoop to push the data to Elasticsearch. Finally, we learned how to use the Head and Marvel plugins to explore documents in Elasticsearch.

With our environment and the required tools set up, and with a basic understanding of how they fit together, we are all set for hands-on experience in writing MapReduce jobs that use ES-Hadoop. In the next chapter, we will take a look at how the WordCount job is developed. We will also develop a couple of jobs for real-world scenarios that write and read data to and from HDFS and Elasticsearch.
