- Apache Hadoop 3 Quick Start Guide
- Hrishikesh Vijay Karambelkar
DataNode
DataNode in the Hadoop ecosystem is primarily responsible for storing application data in a distributed and replicated form. It acts as a slave in the system and is controlled by the NameNode. Files stored in HDFS are divided into multiple blocks, much as a traditional storage device is divided into sectors. A block is the minimal unit in which data can be read or written by the Hadoop filesystem. This gives the ecosystem a natural way of slicing large files into blocks and storing them across multiple nodes. The default block size is 128 MB in Hadoop 2.x and 3.x (older versions used 64 MB), and it can be changed through the DataNode configuration. HDFS is designed to support very large files with write-once, read-many semantics.
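For instance, the block size can be overridden in hdfs-site.xml through the dfs.blocksize property. This is a minimal sketch; the 256 MB value below is only an illustrative choice, not a recommendation:

```xml
<!-- hdfs-site.xml: override the default HDFS block size -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <!-- Accepts plain bytes or a suffixed value such as 256m or 1g -->
    <value>256m</value>
    <description>Block size for newly created files; files already
    written keep the block size they were created with.</description>
  </property>
</configuration>
```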
DataNodes are primarily responsible for storing these blocks and retrieving them when they are requested by clients through the NameNode. In Hadoop 3.x, a DataNode can store not only replicated data blocks, but also the checksum or parity blocks produced by erasure coding of the original data, distributed across the cluster. For replicated files, DataNodes follow a replication pipeline mechanism: the client streams data in small chunks to the first DataNode, and each DataNode forwards what it receives to the next DataNode in the pipeline until the desired replication factor is reached.
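The length of this pipeline is governed by the replication factor, which can be set cluster-wide in hdfs-site.xml. The value 3 below is the stock default, shown only for illustration:

```xml
<!-- hdfs-site.xml: cluster-wide default replication factor -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- Each block is written through a pipeline of this many DataNodes -->
    <value>3</value>
  </property>
</configuration>
```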
When a cluster starts, the NameNode begins in safe mode until the DataNodes register their data block information with it. Once this is validated, the NameNode starts engaging with clients to serve requests. When a DataNode starts, it first connects to the NameNode and reports the availability of all of its data blocks. This information is registered with the NameNode, and when a client requests a certain block, the NameNode points it to the respective DataNode from its registry. The client then interacts with the DataNode directly to read or write the data block. While the cluster is running, each DataNode communicates with the NameNode periodically by sending a heartbeat signal. The frequency of the heartbeat can be configured through the configuration files.
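As a sketch of that configuration, the heartbeat interval lives in hdfs-site.xml under dfs.heartbeat.interval; the values below are the stock defaults, shown only to illustrate the knobs:

```xml
<!-- hdfs-site.xml: heartbeat tuning (values shown are the defaults) -->
<configuration>
  <property>
    <name>dfs.heartbeat.interval</name>
    <!-- Seconds between DataNode heartbeats to the NameNode -->
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.heartbeat.recheck-interval</name>
    <!-- Milliseconds; combined with the interval above to decide
         when a DataNode is declared dead -->
    <value>300000</value>
  </property>
</configuration>
```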
We have gone through the key architectural components of the Apache Hadoop framework; we will gain a deeper understanding of each of these areas in the following chapters.