
Loading data from a local machine to HDFS

In this recipe, we are going to load data from a local machine's disk to HDFS.

Getting ready

To perform this recipe, you should already have a Hadoop cluster up and running.

How to do it...

Performing this recipe is as simple as copying data from one folder to another. There are a couple of ways to copy data from the local machine to HDFS.

  • Using the copyFromLocal command
    • To copy the file to HDFS, let's first create a directory on HDFS and then copy the file into it. Here are the commands to do this:
      hadoop fs -mkdir /mydir1
      hadoop fs -copyFromLocal /usr/local/hadoop/LICENSE.txt /mydir1
      
  • Using the put command
    • We will first create the directory, and then put the local file in HDFS:
      hadoop fs -mkdir /mydir2
      hadoop fs -put /usr/local/hadoop/LICENSE.txt /mydir2
      

You can validate that the files have been copied to the correct folders by listing the files:

hadoop fs -ls /mydir1
hadoop fs -ls /mydir2
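
If you prefer to do the same thing programmatically rather than from the shell, the following is a minimal Java sketch using Hadoop's FileSystem API. It assumes the cluster configuration (core-site.xml and hdfs-site.xml) is available on the classpath so the client can locate the NameNode; the target directory /mydir3 is just an illustrative name.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToHdfs {
    public static void main(String[] args) throws Exception {
        // Load the cluster configuration; fs.defaultFS tells the client where the NameNode is.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Create the target directory (equivalent to: hadoop fs -mkdir /mydir3).
        Path dir = new Path("/mydir3");   // hypothetical directory name
        fs.mkdirs(dir);

        // Copy the local file into HDFS (equivalent to: hadoop fs -put ...).
        Path src = new Path("/usr/local/hadoop/LICENSE.txt");
        fs.copyFromLocalFile(src, dir);

        // List the directory to confirm the copy (equivalent to: hadoop fs -ls /mydir3).
        for (FileStatus status : fs.listStatus(dir)) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }

        fs.close();
    }
}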

How it works...

When you use the copyFromLocal or put command, the following things occur:

  1. First of all, the HDFS client (the command-line shell, in this case) contacts the NameNode because it needs to copy the file to HDFS.
  2. The NameNode then asks the client to break the file into blocks of the configured cluster block size. In Hadoop 2.x, the default block size is 128 MB.
  3. Based on the capacity and available space on the DataNodes, the NameNode decides where these blocks should be copied.
  4. The client then starts copying data to the DataNodes assigned to each block. The blocks are copied sequentially, one after another.
  5. When a single block is copied, it is sent to the DataNode in smaller packets (64 KB by default). Each packet carries a checksum; once a packet has been received, the checksum is verified to confirm that the data arrived intact. The packets are then forwarded to the next DataNode, where the block will be replicated.
  6. The HDFS client is only responsible for copying the data to the first DataNode; replication is taken care of by the DataNodes themselves. Thus, the data block is pipelined from one DataNode to the next.
  7. As the block copying and replication take place, the DataNodes report the blocks they have received to the NameNode, which updates the file's metadata.
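
The same pipeline is exercised whenever a file is written through the FileSystem API. The following is a minimal sketch, assuming a reachable cluster whose configuration is on the classpath; the file name /mydir1/pipeline-demo.txt is made up for illustration. The comments map the calls to the steps above.

import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WritePipelineSketch {
    public static void main(String[] args) throws Exception {
        // Step 1: contacting the NameNode is handled inside the FileSystem client.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path target = new Path("/mydir1/pipeline-demo.txt");   // hypothetical file name

        // Step 2: the block size used when the file is split into blocks
        // (128 MB by default in Hadoop 2.x).
        System.out.println("Block size: " + fs.getDefaultBlockSize(target) + " bytes");

        // Steps 3-6: the client only writes a byte stream; the DFS client library
        // splits it into blocks and packets, checksums each packet, and pipelines
        // the packets through the DataNodes chosen by the NameNode.
        try (FSDataOutputStream out = fs.create(target, true)) {
            out.write("Hello HDFS write pipeline".getBytes(StandardCharsets.UTF_8));
        }

        // Step 7: after close(), the NameNode's metadata reflects the completed file.
        System.out.println("Wrote " + fs.getFileStatus(target).getLen() + " bytes");
        fs.close();
    }
}

Note that the client code never deals with blocks, packets, or replicas directly; all of that is handled by the DFS client library and the DataNode pipeline.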