- Apache Hadoop 3 Quick Start Guide
- Hrishikesh Vijay Karambelkar
Organizational data growth
Although Hadoop allows you to add and remove nodes dynamically in an on-premise cluster, this is not a day-to-day task. So, when you approach sizing, you must be cognizant of data growth over the years. For example, if you are building a cluster to process social media analytics, and the organization expects to add x pages a month for processing, sizing needs to be computed accordingly. You can estimate the data generated in each year with the following formula:
Data generated in year X = Data generated in year (X-1) × (1 + % growth) + Data coming from additional sources in year X
The following image shows a cluster sizing calculator, which can be used to compute the size of your cluster based on data growth (Excel attached). In this case, for the first year, last year's data can provide an initial size estimate:
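The growth formula above can be sketched in code. This is a hypothetical illustration, not the attached calculator: the function name and the sample figures (in TB) are assumptions for the example.

```python
# Hypothetical sketch of the sizing formula: each year's data is last
# year's data grown by a fixed percentage, plus data from any new
# sources onboarded that year. All figures are illustrative, in TB.

def yearly_data(initial_tb, growth_pct, new_sources_tb):
    """Return projected data generated for each year.

    initial_tb      -- data generated in year 1
    growth_pct      -- expected year-over-year growth, e.g. 0.20 for 20%
    new_sources_tb  -- extra data (TB) arriving in each subsequent year
    """
    sizes = [initial_tb]
    for extra in new_sources_tb:
        sizes.append(sizes[-1] * (1 + growth_pct) + extra)
    return sizes

# Example: 100 TB in year 1, 20% growth, 10 TB of new sources per year.
print(yearly_data(100, 0.20, [10, 10, 10]))
# [100, 130.0, 166.0, 209.2]
```

Plugging each year's output back into the formula for the next year is exactly what the spreadsheet calculator does column by column.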

While we work through storage sizing, it is worth pointing out another interesting difference between Hadoop and traditional storage systems: Hadoop does not require RAID servers. RAID adds little value here because HDFS already provides data replication, scalability, and high availability at the software level.
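Since HDFS replication replaces RAID, raw disk sizing must account for the replication factor. A minimal sketch, assuming HDFS's default replication factor of 3 and a hypothetical 25% headroom for temporary and intermediate data (both parameters are assumptions, not fixed rules):

```python
# Rough raw-capacity estimate for an HDFS cluster. The replication
# factor of 3 is the HDFS default; the 25% headroom for intermediate
# data is an illustrative assumption. Figures are in TB.

def raw_capacity_tb(usable_data_tb, replication=3, headroom=0.25):
    """Raw disk capacity needed to store usable_data_tb in HDFS."""
    return usable_data_tb * replication * (1 + headroom)

print(raw_capacity_tb(200))
# 750.0  -> 200 TB of data needs roughly 750 TB of raw disk
```

Because replication is handled by HDFS itself, adding plain JBOD disks to DataNodes is the usual practice rather than RAID arrays.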