- Apache Hadoop 3 Quick Start Guide
- Hrishikesh Vijay Karambelkar
Workload and computational requirements
While the previous two areas cover the sizing of the cluster, it is the workload requirements that drive its computational capabilities. CPU-intensive operations call for a higher core count and a compute-oriented configuration. The number of Mapper and Reducer tasks that run as part of a Hadoop job also contributes to these requirements. The number of Mapper tasks is usually higher than the number of Reducer tasks; the ratio between the two is determined by the processing requirements at each end.
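As a rough illustration of how task counts follow from the data rather than from any fixed rule, the following sketch estimates Mapper and Reducer counts for a single job. It assumes the default 128 MB HDFS block size (one Mapper per input split) and a workload-dependent Reducer-to-Mapper ratio; the function name and the ratio value are hypothetical, not a Hadoop rule:

```python
import math

HDFS_BLOCK_SIZE_MB = 128  # default block/split size in Hadoop 3

def estimate_task_counts(input_size_gb, reducer_ratio=0.25):
    """Return (mappers, reducers) for a job over input_size_gb of data.

    reducer_ratio is a workload heuristic: aggregation-heavy jobs may
    need fewer Reducers, shuffle- or join-heavy jobs may need more.
    """
    input_mb = input_size_gb * 1024
    mappers = max(1, math.ceil(input_mb / HDFS_BLOCK_SIZE_MB))  # one per split
    reducers = max(1, int(mappers * reducer_ratio))
    return mappers, reducers

# Example: a 500 GB input yields 4,000 Mappers and roughly 1,000 Reducers.
print(estimate_task_counts(500))
```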
There is no definitive figure one can arrive at for memory and CPU requirements, as they vary with the block replication factor, the computational load of the tasks, and the data storage needs. To help with this, we have provided a calculator that considers different Hadoop cluster configurations, such as CPU-intensive, memory-intensive, and balanced.
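The calculator itself is not reproduced here; as a minimal sketch of the idea, the following derives a node count from storage needs (raw data, replication factor, and headroom for intermediate data) and then reports the aggregate compute each profile would provide. All per-node figures and the profile table are illustrative assumptions, not recommendations:

```python
import math

# Hypothetical per-node hardware for the three profiles mentioned above.
PROFILES = {
    # profile: (cores per node, RAM GB per node, usable disk TB per node)
    "cpu-intensive":    (48, 192, 24),
    "memory-intensive": (24, 512, 24),
    "balanced":         (24, 256, 24),
}

def size_cluster(raw_data_tb, profile, replication=3, overhead=1.3):
    """Estimate DataNode count from storage, then aggregate compute.

    replication: HDFS block replication factor (default 3).
    overhead:    headroom for shuffle/intermediate data and growth.
    """
    cores, ram_gb, disk_tb = PROFILES[profile]
    required_tb = raw_data_tb * replication * overhead
    nodes = max(1, math.ceil(required_tb / disk_tb))
    return {"nodes": nodes,
            "total_cores": nodes * cores,
            "total_ram_gb": nodes * ram_gb}

# Example: 100 TB of raw data needs ~390 TB of usable storage, so the
# node count is the same across profiles, but the compute they deliver
# differs sharply, which is the point of choosing a profile.
for p in PROFILES:
    print(p, size_cluster(100, p))
```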