- Apache Hadoop 3 Quick Start Guide
- Hrishikesh Vijay Karambelkar
DataNode
DataNode in the Hadoop ecosystem is primarily responsible for storing application data in distributed, replicated form. It acts as a slave in the system and is controlled by the NameNode. Each file stored in HDFS is divided into multiple blocks, analogous to the blocks of a traditional storage device. A block is the minimal unit in which data can be read or written by the Hadoop filesystem. This design gives a natural advantage in slicing large files into blocks and storing them across multiple nodes. The default block size is 128 MB (64 MB in older Hadoop versions) and can be changed through HDFS configuration. HDFS is designed to support very large files and write-once-read-many semantics.
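The block size is controlled by the `dfs.blocksize` property in `hdfs-site.xml` (it can also be overridden per file at creation time). A minimal illustrative fragment, with 256 MB chosen here purely as an example value:

```xml
<!-- hdfs-site.xml: set the default HDFS block size to 256 MB.
     Size suffixes such as k, m, g are accepted; a plain number means bytes. -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>256m</value>
  </property>
</configuration>
```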
DataNodes are primarily responsible for storing these blocks and retrieving them when clients request them via the NameNode. In Hadoop 3.x, a DataNode can store not only replicated data blocks but also, with erasure coding, the parity of the original blocks in a distributed manner. DataNodes follow a replication pipeline mechanism: data is written in chunks, and each DataNode forwards the chunks it receives to the next DataNode in the pipeline.
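The slicing of a file into fixed-size blocks, each stored alongside a checksum, can be sketched as follows. This is a toy illustration rather than Hadoop's implementation: the 4-byte block size and MD5 checksum are assumptions chosen for demonstration (HDFS defaults to 128 MB blocks with CRC-based checksums).

```python
import hashlib

BLOCK_SIZE = 4  # bytes, for demonstration only; the real HDFS default is 128 MB


def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Slice a byte stream into fixed-size blocks, pairing each block
    with a checksum, mimicking how HDFS stores data plus checksums."""
    blocks = []
    for offset in range(0, len(data), block_size):
        chunk = data[offset:offset + block_size]
        blocks.append((chunk, hashlib.md5(chunk).hexdigest()))
    return blocks


blocks = split_into_blocks(b"hello hdfs!")
print(len(blocks))  # 3 blocks: b"hell", b"o hd", b"fs!"
```

In a real cluster each of these blocks would land on a different DataNode, and the checksum would let a reader detect corruption on retrieval.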
When a cluster starts, the NameNode begins in safe mode and stays there until the DataNodes register their block information with it. Once this registration is validated, the NameNode starts serving client requests. When a DataNode starts, it first connects to the NameNode and reports the availability of all of its data blocks. This information is registered with the NameNode, and when a client requests a certain block, the NameNode points it to the respective DataNode from its registry. The client then interacts with that DataNode directly to read or write the block. While the cluster is running, each DataNode communicates with the NameNode periodically by sending a heartbeat signal. The frequency of the heartbeat can be set through the configuration files.
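The heartbeat frequency mentioned above is set by the `dfs.heartbeat.interval` property in `hdfs-site.xml`, whose value is in seconds (the default is 3). A minimal fragment, with 5 seconds chosen here only as an example:

```xml
<!-- hdfs-site.xml: have each DataNode send a heartbeat every 5 seconds
     instead of the default 3. -->
<configuration>
  <property>
    <name>dfs.heartbeat.interval</name>
    <value>5</value>
  </property>
</configuration>
```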
We have gone through the key architectural components of the Apache Hadoop framework; we will gain a deeper understanding of each of these areas in the next chapters.