Apache Hadoop 3 Quick Start Guide
Hrishikesh Vijay Karambelkar
Running Hadoop in standalone mode
Now that you have successfully unzipped Hadoop, let's try and run a Hadoop program in standalone mode. As we mentioned in the introduction, Hadoop's standalone mode does not require any running daemons; you can run your MapReduce program directly from its compiled jar. We will look at how to write MapReduce programs in Chapter 4, Developing MapReduce Applications. For now, it's time to run a program we have already prepared. To download, compile, and run the sample program, simply take the following steps:
- You will need Maven and Git on your machine to proceed. Apache Maven can be set up with the following command:
hadoop@base0:/$ sudo apt-get install maven
- This will install Maven on your local machine. Run the mvn -version command to verify that it has been installed properly. Now, install Git on your local machine with the following command:
hadoop@base0:/$ sudo apt-get install git
- Now, create a folder in your home directory (such as src/) to keep all examples, and then run the following command to clone the Git repository locally:
hadoop@base0:/$ git clone https://github.com/PacktPublishing/Apache-Hadoop-3-Quick-Start-Guide/ src/
- The preceding command will create a local copy of the repository in src/. Now go to the 2/ folder, which contains the examples for Chapter 2, Planning and Setting Up Hadoop Clusters.
- Now run the following mvn command from the 2/ folder. This will start downloading the artifacts from the internet that the example project needs in order to build, as shown in the next screenshot:
hadoop@base0:/$ mvn

- Finally, you will get a build successful message. This means the jar containing your example has been created and is ready to go. The next step is to use this jar to run the sample program, which in this case provides a utility that allows users to supply a regular expression. The MapReduce program then searches across the given folder and reports the matched content along with its count.
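The actual ExpressionFinder source ships in the cloned repository; what follows is only a minimal sketch of how such a regex-driven counting job could be written with the org.apache.hadoop.mapreduce API. The class names, the expression.pattern configuration key, and the assumption that the supplied expression is used to split each line into tokens (as the \s+ example in the next step suggests) are all illustrative, not taken from the book's code.

import java.io.IOException;
import java.util.regex.Pattern;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical sketch of a regex-driven token counter (not the book's actual source).
public class ExpressionFinderSketch {

    // Splits each input line on the user-supplied expression and emits (token, 1).
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text token = new Text();
        private Pattern pattern;

        @Override
        protected void setup(Context context) {
            // "expression.pattern" is an assumed configuration key set by the driver.
            pattern = Pattern.compile(
                    context.getConfiguration().get("expression.pattern", "\\s+"));
        }

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            for (String part : pattern.split(line.toString())) {
                if (!part.isEmpty()) {
                    token.set(part);
                    context.write(token, ONE);
                }
            }
        }
    }

    // Sums the counts emitted for each distinct token.
    public static class CountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> counts, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable count : counts) {
                sum += count.get();
            }
            context.write(key, new LongWritable(sum));
        }
    }
}

Because summing partial counts is associative, a reducer like this can also be registered as a combiner to cut down the data shuffled between the map and reduce phases.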
- Let's now create an input folder and copy some documents into it. We will use a simple expression to get all the words that are separated by at least one white space; in that case, the expression will be \\s+. (Please refer to the standard Java documentation for the java.util.regex.Pattern class for details on how to write regular expressions for string patterns.)
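Before running the job, you can sanity-check the expression with plain Java. The small class below (RegexCheck is just an illustrative name) shows that splitting on \s+ yields the whitespace-separated words of a line.

import java.util.Arrays;
import java.util.regex.Pattern;

// \s+ matches one or more whitespace characters, so splitting on it yields the words.
public class RegexCheck {
    public static void main(String[] args) {
        Pattern whitespace = Pattern.compile("\\s+");
        String line = "Hadoop runs   in\tstandalone mode";
        System.out.println(Arrays.toString(whitespace.split(line)));
        // Prints: [Hadoop, runs, in, standalone, mode]
    }
}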
- Create a folder and put some sample text files in it for expression matching. Also pick a path for a new output folder, but do not create it yourself; the MapReduce job creates it and will fail if it already exists. To run the program, run the following command:
hadoop@base0:/$ <hadoop-home>/bin/hadoop jar <location-of-generated-jar> \
    ExpressionFinder "\\s+" <folder-containing-input-files> <new-output-folder> > stdout.txt
In most cases, the generated jar will be in the target/ folder inside the project's home directory. The command will create a MapReduce job, run the program, and then produce the output in the given output folder. A successful run should end with no errors, as shown in the following screenshot:

Similarly, the output folder will contain the files part-r-00000 and _SUCCESS. The file part-r-00000 should contain the output of your expression run across multiple files. You can experiment with other regular expressions if you wish. Here, we have simply run a regular expression program that can work over masses of files in a completely distributed manner. We will move on to look at the programming aspects of MapReduce in Chapter 4, Developing MapReduce Applications.
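As a small preview of the driver-side wiring that Chapter 4 covers in detail, here is a hedged sketch of how the three command-line arguments (expression, input folder, output folder) are typically handed to a MapReduce job. It pairs with the hypothetical ExpressionFinderSketch classes shown earlier and is not the repository's actual ExpressionFinder driver.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver pairing with the ExpressionFinderSketch classes sketched earlier.
public class ExpressionFinderDriver {
    public static void main(String[] args) throws Exception {
        // args: <expression> <input-folder> <output-folder>, mirroring the command above.
        Configuration conf = new Configuration();
        conf.set("expression.pattern", args[0]);   // assumed key read by TokenMapper.setup()

        Job job = Job.getInstance(conf, "expression finder");
        job.setJarByClass(ExpressionFinderDriver.class);
        job.setMapperClass(ExpressionFinderSketch.TokenMapper.class);
        job.setCombinerClass(ExpressionFinderSketch.CountReducer.class);
        job.setReducerClass(ExpressionFinderSketch.CountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[1]));
        // The job creates this folder itself and fails if it already exists.
        FileOutputFormat.setOutputPath(job, new Path(args[2]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged with mvn and launched through bin/hadoop jar, a driver like this would follow exactly the steps shown above.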