
Setting the HDFS block size for all the files in a cluster

In this recipe, we are going to take a look at how to set a block size at the cluster level.

Getting ready

To perform this recipe, you should already have a running Hadoop cluster.

How to do it...

The HDFS block size can be set for all the files in the cluster, or for individual files at the time they are written. To change the default block size at the cluster level, we need to modify the hdfs-site.xml file.

By default, the HDFS block size is 128 MB. If we want to modify this, we need to update the following property. Here, the value 67108864 (64 x 1024 x 1024 bytes) changes the default block size to 64 MB:

<property>
    <name>dfs.blocksize</name>
    <value>67108864</value>
    <description>HDFS block size</description>
</property>

If you have a multi-node Hadoop cluster, you should update this file on all the nodes, that is, on the NameNode as well as on every DataNode. Make sure you save these changes and restart the HDFS daemons:

/usr/local/hadoop/sbin/stop-dfs.sh
/usr/local/hadoop/sbin/start-dfs.sh
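
After the restart, you can confirm that the new default is in effect with the hdfs getconf utility. This is a minimal sanity check, assuming the hdfs binary is on your PATH and it picks up the updated configuration:

# Print the effective default block size in bytes; we expect 67108864 here
hdfs getconf -confKey dfs.blocksize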

This sets the block size only for files that are added to the HDFS cluster from now on. Note that it does not change the block size of the files that are already present in HDFS; a file's block size is fixed when the file is written, so the only way to change it is to rewrite the file, as the following sketch illustrates.
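
The per-file alternative mentioned earlier works by passing the property as a generic option at write time, and an existing file picks up a new block size only when it is rewritten. A hedged sketch; the file and paths shown (sample.txt, /data) are placeholders, not from the recipe:

# Write a single file with a 64 MB block size, overriding the cluster default
hdfs dfs -D dfs.blocksize=67108864 -put sample.txt /data/sample.txt

# An existing file must be rewritten to pick up a new block size,
# for example by copying it with the desired value (128 MB here)
hdfs dfs -D dfs.blocksize=134217728 -cp /data/sample.txt /data/sample_128mb.txt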

How it works...

By default, the HDFS block size is 128 MB for Hadoop 2.X. Sometimes, we may want to change this default block size for optimization purposes. Once the configuration is successfully updated, all new files will be saved in blocks of this size. Note that these changes do not affect the files that are already present in HDFS; their block size was fixed at the time they were written.
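
To see which block size a particular file was actually written with, you can inspect its blocks with hdfs fsck. A minimal example, assuming the placeholder path /data/sample.txt from the earlier sketch:

# List the file's blocks; the block lengths reflect the size used at write time
hdfs fsck /data/sample.txt -files -blocks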
