Hadoop 2.x Administration Cookbook
Gurmukh Singh
Introduction
As Hadoop is a distributed system with many components and has a reputation for being quite complex, it is important to understand the basic architecture before we start with the deployments.
In this chapter, we will take a look at the architecture and the recipes to deploy a Hadoop cluster in various modes. This chapter will also cover recipes on commissioning and decommissioning nodes in a cluster.
The recipes in this chapter will primarily focus on deploying a cluster based on an Apache Hadoop distribution, as it is the best way to learn and explore Hadoop.
Note
While the recipes in this chapter will give you an overview of a typical configuration, we encourage you to adapt this design according to your needs. The deployment directory structure varies according to IT policies within an organization. All our deployments will be based on the Linux operating system, as it is the most commonly used platform for Hadoop in production. You can use any flavor of Linux; the recipes are very generic in nature and should work on all Linux flavors, with the appropriate changes in paths and installation methods, such as yum or apt-get.
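For instance, on a typical Linux host the prerequisites and the Hadoop tarball might be pulled in as sketched below; the JDK package names, the Hadoop version, and the /opt install path are only assumptions and should be adapted to your environment and IT policies.

```bash
# Install a JDK with the distribution's package manager (package names differ by distro).
sudo yum install -y java-1.8.0-openjdk        # RHEL/CentOS
# sudo apt-get install -y openjdk-8-jdk       # Debian/Ubuntu equivalent

# Download and unpack an Apache Hadoop 2.x release; the version and target
# directory here are examples only -- adjust them to your environment.
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
sudo tar -xzf hadoop-2.7.3.tar.gz -C /opt/
sudo ln -s /opt/hadoop-2.7.3 /opt/hadoop
```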
Overview of Hadoop Architecture
Hadoop is a framework and not a tool. It is a combination of various components, such as a filesystem, a processing engine, data ingestion tools, databases, workflow execution tools, and so on. Hadoop is based on a client-server architecture, with a master node for the storage layer and another for the processing layer.
The Namenode is the master for Hadoop Distributed File System (HDFS) storage, and the ResourceManager is the master for YARN (Yet Another Resource Negotiator). The Namenode stores the file metadata, while the actual blocks/data reside on the slave nodes, called Datanodes. All jobs are submitted to the ResourceManager, which then assigns tasks to its slaves, called NodeManagers. In a highly available cluster, we can have more than one Namenode and ResourceManager.
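As a minimal sketch of how slaves and clients learn where these masters live, the two properties below point HDFS at the Namenode and YARN at the ResourceManager; the hostname master1, the port 9000, and the /opt/hadoop path are placeholders assumed for this example, not values prescribed by the book.

```bash
# Minimal configuration pointing the cluster at the two masters.
# "master1" is a placeholder hostname; /opt/hadoop is an assumed install path.
cat > /opt/hadoop/etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master1:9000</value>   <!-- Namenode RPC endpoint -->
  </property>
</configuration>
EOF

cat > /opt/hadoop/etc/hadoop/yarn-site.xml <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master1</value>               <!-- ResourceManager host -->
  </property>
</configuration>
EOF
```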
Each of these masters is a single point of failure, which makes them critical components of the cluster, so care must be taken to make them highly available.
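Once high availability is configured, the currently active master can be checked from the command line; nn1 and rm1 below are example service IDs that would be defined in your HA configuration, not fixed names.

```bash
# In an HA cluster, report which Namenode and ResourceManager are currently active.
hdfs haadmin -getServiceState nn1
yarn rmadmin -getServiceState rm1
```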
Although there are many concepts to learn, such as application masters, containers, schedulers, and so on, as this is a recipe book, we will keep the theory to a minimum.