- Hadoop 2.x Administration Cookbook
- Gurmukh Singh
Configuring HDFS block size
Getting ready
To step through the recipes in this chapter, make sure you have completed the recipes in Chapter 1, Hadoop Architecture and Deployment or at least understand the basic Hadoop cluster setup.
How to do it...
- ssh to the master node, which is the Namenode, and navigate to the directory where Hadoop is installed. In the previous chapter, Hadoop was installed at /opt/cluster/hadoop:
$ ssh root@10.0.0.4
- Change to the hadoop user, or any other user that is running Hadoop, by using the following:
$ sudo su - hadoop
- Edit the hdfs-site.xml file and modify the dfs.blocksize parameter to reflect the desired block size. dfs.blocksize is the parameter that decides the value of the HDFS block size. The unit is bytes, and the default value is 64 MB in Hadoop 1 and 128 MB in Hadoop 2. The block size can be configured according to need.
- Once the changes are made to hdfs-site.xml, copy the file across all nodes in the cluster.
- Then restart the Namenode and Datanode daemons on all nodes.
- The block size can also be configured per file by specifying it during the copy process.
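The screenshots referred to in the original text are not reproduced here. A minimal hdfs-site.xml entry setting a 256 MB block size might look like the following (the value 268435456 is an example; in Hadoop 2 the value may also be given with a size suffix such as 256m):

```xml
<property>
  <name>dfs.blocksize</name>
  <!-- Block size in bytes: 268435456 = 256 MB (example value) -->
  <value>268435456</value>
</property>
```

For the per-file case, the same property can be passed as a generic option at copy time, for example: $ hdfs dfs -D dfs.blocksize=268435456 -put file.txt /dir (the file name and target directory here are placeholders).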
How it works...
The best practice is to keep the configurations the same across all nodes in the cluster, but it is not mandatory. For example, the block size configured on the Namenode can differ from that configured on an edge node. In that case, the parameter on the source node, that is, the node from which the copy is initiated, takes effect.
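As a quick illustration of what the block size means for storage layout, the number of blocks a file occupies is simply the file size divided by the block size, rounded up. A short sketch (not part of Hadoop itself) makes this concrete:

```python
import math

def block_count(file_size_bytes: int, block_size_bytes: int = 128 * 1024 * 1024) -> int:
    """Return the number of HDFS blocks a file of the given size occupies.

    Defaults to the Hadoop 2 default block size of 128 MB.
    """
    if file_size_bytes == 0:
        return 0
    return math.ceil(file_size_bytes / block_size_bytes)

# A 1 GB file with the default 128 MB block size occupies 8 blocks.
print(block_count(1024 * 1024 * 1024))                       # 8
# The same file with a 256 MB block size occupies only 4 blocks.
print(block_count(1024 * 1024 * 1024, 256 * 1024 * 1024))    # 4
```

Fewer, larger blocks mean less metadata pressure on the Namenode, which is why larger block sizes are often chosen for clusters storing big files. The effective block size of an existing file can be checked with $ hdfs dfs -stat %o /path/to/file.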