Machine Learning with Spark (Second Edition)
Rajdeep Dua, Manpreet Singh Ghotra, Nick Pentreath
Spark clusters
A Spark cluster is made up of two types of processes: a driver program and multiple executors. In local mode, all of these processes run within the same JVM. In a cluster, these processes usually run on separate nodes.
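To make the split between the driver and the executors concrete, here is a minimal Scala sketch, assuming the Spark 2.x API (matching the spark-examples_2.11-2.0.0 jar used later in this section); the object and application names are ours for illustration:

import org.apache.spark.{SparkConf, SparkContext}

object ClusterModesSketch {
  def main(args: Array[String]): Unit = {
    // Driver-side setup: the SparkContext is created in the driver process.
    val conf = new SparkConf()
      .setAppName("ClusterModesSketch")
      .setMaster("local[*]")           // local mode: tasks run inside the driver's JVM
      // .setMaster("spark://IP:PORT") // standalone cluster: tasks run on worker nodes

    val sc = new SparkContext(conf)

    // The function passed to map() is serialized and executed by the executors,
    // whether they are local threads or processes on remote worker nodes.
    val count = sc.parallelize(1 to 100).map(_ * 2).count()
    println(s"Processed $count elements")

    sc.stop()
  }
}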
For example, a typical cluster that runs in Spark's standalone mode (that is, using Spark's built-in cluster management modules) will have the following:
- A master node that runs the Spark standalone master process as well as the driver program
- A number of worker nodes, each running an executor process
While we will be using Spark's local standalone mode throughout this book to illustrate concepts and examples, the same Spark code that we write can be run on a Spark cluster. In the preceding example, if we run the code on a Spark standalone cluster, we could simply pass in the URL for the master node, as follows:
$ ./bin/spark-submit --master spark://IP:PORT \
--class org.apache.spark.examples.SparkPi \
./examples/jars/spark-examples_2.11-2.0.0.jar 100
Here, IP is the IP address and PORT is the port of the Spark master. This tells Spark to run the program on the cluster where the Spark master process is running.
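To give a sense of what is being submitted: the SparkPi program estimates pi by Monte Carlo sampling, and the trailing 100 sets the number of partitions (slices) to sample across. The following is a rough Scala sketch of the same idea, not the bundled example's exact source; note that it sets no master URL in code, which is exactly what lets the --master flag of spark-submit decide where it runs:

import org.apache.spark.{SparkConf, SparkContext}
import scala.math.random

object PiSketch {
  def main(args: Array[String]): Unit = {
    // No setMaster() call: the master URL is supplied by spark-submit.
    val sc = new SparkContext(new SparkConf().setAppName("PiSketch"))

    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000L * slices

    // Sample random points in the unit square; the fraction landing inside
    // the unit circle approximates pi / 4.
    val inside = sc.parallelize(1L until n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y <= 1) 1L else 0L
    }.reduce(_ + _)

    println(s"Pi is roughly ${4.0 * inside / n}")
    sc.stop()
  }
}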
A full treatment of Spark's cluster management and deployment is beyond the scope of this book. However, we will briefly show you how to set up and use an Amazon EC2 cluster later in this chapter.
For an overview of Spark cluster and application deployment, see the Cluster Mode Overview and Submitting Applications pages of the official Spark documentation.