
Setting the HDFS block size for all the files in a cluster

In this recipe, we are going to take a look at how to set a block size at the cluster level.

Getting ready

To perform this recipe, you should already have a running Hadoop cluster.

How to do it...

The HDFS block size can be configured for all the files in the cluster or on a per-file basis. To change the block size at the cluster level, we need to modify the hdfs-site.xml file.

By default, the HDFS block size is 128 MB. In case we want to modify this, we need to update the property shown in the following code. Here, the value 67108864 (64 × 1024 × 1024 bytes) changes the default block size to 64 MB. Note that in Hadoop 2.X the property is named dfs.blocksize; the older name dfs.block.size is deprecated but still accepted:

<property>
    <name>dfs.blocksize</name>
    <value>67108864</value>
    <description>HDFS block size</description>
</property>

If you have a multi-node Hadoop cluster, you should update this file on all the nodes, that is, on the NameNode and on every DataNode. Save these changes and restart the HDFS daemons:

/usr/local/hadoop/sbin/stop-dfs.sh
/usr/local/hadoop/sbin/start-dfs.sh
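
Once the daemons are back up, we can confirm that the new value has taken effect by querying the running configuration (this assumes the hdfs command is on the PATH):

# Print the effective block size, in bytes
hdfs getconf -confKey dfs.blocksize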

This sets the block size for files that get added to HDFS from now on. Note that it does not change the block size of the files that are already present in HDFS; the only way to give an existing file a different block size is to rewrite it.
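
The block size can also be overridden for a single file at upload time by passing the property on the command line. The following is a minimal sketch; the paths /tmp/sample.txt and /data/ are placeholders:

# Upload one file with a 64 MB block size, overriding the cluster default
hdfs dfs -D dfs.blocksize=67108864 -put /tmp/sample.txt /data/

# Since existing files cannot be changed in place, rewriting (here, copying)
# is the way to apply a new block size to a file already in HDFS
hdfs dfs -D dfs.blocksize=67108864 -cp /data/sample.txt /data/sample-64mb.txt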

How it works...

By default, the HDFS block size is 128 MB for Hadoop 2.X. Sometimes, we may want to change this default block size for optimization purposes. Once this configuration is successfully updated, all new files will be saved into blocks of this size. Note that these changes do not affect the files that are already present in HDFS; their block size was fixed at the time they were written.
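
To check which block size an existing file actually uses, we can ask fsck to report its blocks. A quick check, with /data/sample.txt as a placeholder path:

# List the blocks that make up the file, along with their sizes
hdfs fsck /data/sample.txt -files -blocks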
