
Deploying on a cluster in standalone mode

Compute resources in a distributed environment need to be managed so that resource utilization is efficient and every job gets a fair chance to run. Spark comes with its own cluster manager, conveniently called standalone mode. Spark also supports working with the YARN and Mesos cluster managers.

The choice of cluster manager is mostly driven by legacy concerns and by whether other frameworks, such as MapReduce, share the same compute resource pool. If your cluster has legacy MapReduce jobs running and not all of them can be converted to Spark jobs, it is a good idea to use YARN as the cluster manager. Mesos is emerging as a data center operating system that conveniently manages jobs across frameworks, and it is very compatible with Spark.

If Spark is the only framework in your cluster, then standalone mode is good enough. As Spark evolves as a technology, you will see more and more use cases of Spark being used as the standalone framework serving all big data compute needs. For example, some jobs may be using Apache Mahout at present because MLlib does not yet have the specific machine learning algorithm the job needs; as soon as MLlib provides it, that job can be moved to Spark.

Getting ready

Let's consider a cluster of six nodes as an example setup: one master and five slaves (replace them with actual node names in your cluster):

Master    m1.zettabytes.com
Slaves    s1.zettabytes.com
          s2.zettabytes.com
          s3.zettabytes.com
          s4.zettabytes.com
          s5.zettabytes.com

How to do it...

  1. Since Spark's standalone mode is the default, all you need to do is have the Spark binaries installed on both the master and slave machines, and put /opt/infoobjects/spark/sbin in the PATH on every node:
    $ echo "export PATH=\$PATH:/opt/infoobjects/spark/sbin" >> /home/hduser/.bashrc
    
  2. Start the standalone master server (SSH to master first):
    hduser@m1.zettabytes.com~] start-master.sh
    

    Master, by default, starts on port 7077, which slaves use to connect to it. It also has a web UI at port 8080.

  3. SSH to each slave node and start the worker daemon, pointing it to the master:
    hduser@s1.zettabytes.com~] spark-class org.apache.spark.deploy.worker.Worker spark://m1.zettabytes.com:7077
    

  4. Rather than manually starting the master and worker daemons on each node, you can also start them using the cluster launch scripts.
  5. First, create the conf/slaves file on the master node and add one line per slave hostname (this example uses five slave nodes; replace them with the DNS names of the slave nodes in your cluster):
    hduser@m1.zettabytes.com~] echo "s1.zettabytes.com" >> conf/slaves
    hduser@m1.zettabytes.com~] echo "s2.zettabytes.com" >> conf/slaves
    hduser@m1.zettabytes.com~] echo "s3.zettabytes.com" >> conf/slaves
    hduser@m1.zettabytes.com~] echo "s4.zettabytes.com" >> conf/slaves
    hduser@m1.zettabytes.com~] echo "s5.zettabytes.com" >> conf/slaves
    

    Once the conf/slaves file is set up, you can call the cluster launch scripts to start or stop the whole cluster, as shown next.

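    The launch scripts live in Spark's sbin directory, which step 1 already put on the PATH. They reach the slave nodes over SSH, so password-less SSH from the master to each slave should be set up. A typical start/stop cycle, run from the master node, looks like this (start-all.sh starts a master on the local machine and one worker on every host listed in conf/slaves; stop-all.sh stops them all):

    hduser@m1.zettabytes.com~] start-all.sh
    hduser@m1.zettabytes.com~] stop-all.sh

    If you prefer finer control, start-master.sh, start-slaves.sh, stop-master.sh, and stop-slaves.sh manage the master and the workers separately.
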
  6. Connect an application to the cluster through the Scala code (SparkContext also requires an application name; myapp below is just a placeholder):
    val conf = new SparkConf().setMaster("spark://m1.zettabytes.com:7077").setAppName("myapp")
    val sparkContext = new SparkContext(conf)
    
  7. Connect to the cluster through Spark shell:
    $ spark-shell --master spark://m1.zettabytes.com:7077
    
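    To quickly verify that the cluster is up, you can run the JDK's jps utility on the nodes (Spark already requires a JDK, so jps should be available): the master node should show a Master process, each slave a Worker process, and any connected application also appears under Running Applications on the master's web UI at http://m1.zettabytes.com:8080:

    hduser@m1.zettabytes.com~] jps
    hduser@s1.zettabytes.com~] jps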

How it works...

In standalone mode, Spark follows the master-slave architecture, very much like Hadoop MapReduce and YARN. The compute master daemon is called the Spark master and runs on one master node. The Spark master can be made highly available using ZooKeeper, and you can also add more standby masters on the fly, if needed.

The compute slave daemon is called the worker and runs on each slave node. The worker daemon does the following:

  • Reports the availability of compute resources on a slave node, such as the number of cores, memory, and others, to Spark master
  • Spawns the executor when asked to do so by Spark master
  • Restarts the executor if it dies

There is, at most, one executor per application per slave machine.

Both the Spark master and worker daemons are very lightweight; typically, a memory allocation between 500 MB and 1 GB is sufficient. This value can be set in conf/spark-env.sh through the SPARK_DAEMON_MEMORY parameter. For example, the following configuration sets the memory to 1 GB for both the master and worker daemons. Make sure you run it with superuser (sudo) privileges:

$ echo "export SPARK_DAEMON_MEMORY=1g" >> /opt/infoobjects/spark/conf/spark-env.sh

By default, each slave node runs one worker instance. Sometimes, you may have a few machines that are more powerful than others; in that case, you can spawn more than one worker on such a machine with the following configuration (set it only on those machines, and also set SPARK_WORKER_CORES explicitly so that each worker does not try to use all the cores):

$ echo "export SPARK_WORKER_INSTANCES=2" >> /opt/infoobjects/spark/conf/spark-env.sh

Spark worker, by default, uses all the cores on the slave machine for its executors. If you would like to limit the number of cores the worker can use, you can set it to that number (for example, 12) with the following configuration:

$ echo "export SPARK_WORKER_CORES=12" >> /opt/infoobjects/spark/conf/spark-env.sh

Spark worker, by default, offers all the available RAM on the machine, minus 1 GB, to its executors. Note that this setting controls only the total memory the worker can hand out, not how much memory each specific executor will use (that is controlled from the driver configuration, described below). To assign a different value for the total memory (for example, 24 GB) to be used by all executors combined, execute the following setting:

$ echo "export SPARK_WORKER_MEMORY=24g" >> /opt/infoobjects/spark/conf/spark-env.sh

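Putting the worker-side settings together, the relevant part of conf/spark-env.sh on one of the more powerful slave machines might look like the following sketch (the values are only examples and should be tuned to the hardware; the file is a plain shell script that the daemons source at startup, so workers have to be restarted for changes to take effect):

# /opt/infoobjects/spark/conf/spark-env.sh (example values)
export SPARK_DAEMON_MEMORY=1g      # memory for the master/worker daemons themselves
export SPARK_WORKER_INSTANCES=2    # run two workers on this machine
export SPARK_WORKER_CORES=12       # cores each worker offers to its executors
export SPARK_WORKER_MEMORY=24g     # total memory each worker offers to its executors
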
There are also some settings you can configure at the driver (application) level; a combined spark-submit example follows this list:

  • To specify the maximum number of CPU cores to be used by a given application across the cluster, you can set the spark.cores.max configuration in Spark submit or Spark shell as follows:
    $ spark-submit --conf spark.cores.max=12
    
  • To specify the amount of memory each executor should be allocated (the minimum recommendation is 8 GB), you can set the spark.executor.memory configuration in Spark submit or Spark shell as follows:
    $ spark-submit --conf spark.executor.memory=8g
    

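Both driver-level settings are usually passed together when the application is submitted to the cluster. A sketch of such an invocation follows; the application class and JAR names are placeholders:

$ spark-submit --master spark://m1.zettabytes.com:7077 \
    --conf spark.cores.max=12 \
    --conf spark.executor.memory=8g \
    --class com.example.MyApp \
    myapp.jar
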
The following diagram depicts the high-level architecture of a Spark cluster:
