- Hadoop Real-World Solutions Cookbook(Second Edition)
- Tanmay Deshpande
Setting the HDFS block size for all the files in a cluster
In this recipe, we are going to take a look at how to set the HDFS block size at the cluster level.
Getting ready
To perform this recipe, you should already have a running Hadoop cluster.
How to do it...
The HDFS block size can be configured for the whole cluster or overridden for an individual file. To change the block size at the cluster level, we need to modify the hdfs-site.xml file.
By default, the HDFS block size is 128MB. If we want to modify this, we need to set the dfs.blocksize property (the older name, dfs.block.size, is deprecated but still accepted), as shown in the following code. This setting changes the default block size to 64MB (67108864 bytes):
<property>
    <name>dfs.blocksize</name>
    <value>67108864</value>
    <description>HDFS Block size</description>
</property>
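As mentioned earlier, the block size can also be overridden for a single file at write time. Here is a minimal sketch, assuming a local file called largefile.txt and a target directory /user/hadoop (both names are illustrative):
# Write one file with a 64MB block size, overriding the cluster default
hdfs dfs -D dfs.blocksize=67108864 -put largefile.txt /user/hadoop/
# Print the block size (in bytes) that was applied to the file
hdfs dfs -stat %o /user/hadoop/largefile.txt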
If you have a multi-node Hadoop cluster, you should update this file on every node, that is, on the NameNode as well as on each DataNode. Make sure you save these changes and restart the HDFS daemons:
/usr/local/hadoop/sbin/stop-dfs.sh
/usr/local/hadoop/sbin/start-dfs.sh
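Once the daemons come back up, you can verify that the new default has been picked up. A quick check, assuming the hdfs command is on your PATH:
# Confirm that the NameNode and DataNode processes are running
jps
# Print the block size (in bytes) the cluster is now configured with
hdfs getconf -confKey dfs.blocksize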
This sets the block size for files that are added to the HDFS cluster from now on; it does not change the block size of files that are already present in HDFS. The only way to change the block size of an existing file is to rewrite it into HDFS with the new setting.
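To see the block size an existing file was written with, you can query its metadata, and rewriting the file picks up the current setting. A small sketch, assuming an existing file at the illustrative path /user/hadoop/old.txt (the -D override on cp may behave differently across Hadoop versions):
# %o prints the block size (in bytes) recorded for the existing file
hdfs dfs -stat %o /user/hadoop/old.txt
# Copying the file within HDFS writes new blocks using the supplied block size
hdfs dfs -D dfs.blocksize=67108864 -cp /user/hadoop/old.txt /user/hadoop/old_64mb.txt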
How it works...
By default, the HDFS block size is 128MB for Hadoop 2.X. Sometimes, we may want to change this default block size for optimization purposes. Once this configuration is successfully updated, all new files are saved in blocks of this size. These changes do not affect files that are already present in HDFS; their block size was fixed at the time they were written.
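To inspect how an individual file has been split into blocks, fsck can report per-file block details; for example (the path is illustrative):
# List the blocks, their sizes, and their locations for a given path
hdfs fsck /user/hadoop/largefile.txt -files -blocks -locations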