
Installing and setting up Spark locally

Spark can be run using the built-in standalone cluster scheduler in the local mode. This means that all the Spark processes are run within the same JVM: effectively, a single, multithreaded instance of Spark. The local mode is very useful for prototyping, development, debugging, and testing. However, this mode can also be useful in real-world scenarios to perform parallel computation across multiple cores on a single computer.

As Spark's local mode is fully compatible with the cluster mode, programs written and tested locally can be run on a cluster with just a few additional steps.

The first step in setting up Spark locally is to download the latest version from http://spark.apache.org/downloads.html, which contains links to the various Spark releases as well as to the latest source code on GitHub.

The documentation available at http://spark.apache.org/docs/latest/ is a comprehensive resource to learn more about Spark. We highly recommend that you explore it!

Spark needs to be built against a specific version of Hadoop in order to access the Hadoop Distributed File System (HDFS) as well as standard and custom Hadoop input sources; supported versions include Cloudera's Hadoop distribution, MapR's Hadoop distribution, and Hadoop 2 (YARN). Unless you wish to build Spark against a specific Hadoop version, we recommend that you download the prebuilt package for Hadoop 2.7 from an Apache mirror at http://d3kbcqa49mib13.cloudfront.net/spark-2.0.2-bin-hadoop2.7.tgz.

Spark requires the Scala programming language (version 2.10.x or 2.11.x at the time of writing this book) in order to run. Fortunately, the prebuilt binary package comes with the Scala runtime packages included, so you don't need to install Scala separately in order to get started. However, you will need a Java Runtime Environment (JRE) or Java Development Kit (JDK) installed.

Refer to the software and hardware list in this book's code bundle for installation instructions. R 3.1+ is also required.

Once you have downloaded the Spark binary package, unpack its contents and change into the newly created directory by running the following commands:

  $ tar xvfz spark-2.0.2-bin-hadoop2.7.tgz
  $ cd spark-2.0.2-bin-hadoop2.7

Spark places the scripts used to run its programs in the bin directory. You can test whether everything is working correctly by running one of the example programs included in Spark. Run the following command:

  $ bin/run-example SparkPi 100

This will run the example in Spark's local standalone mode. In this mode, all the Spark processes are run within the same JVM, and Spark uses multiple threads for parallel processing. By default, the preceding example uses a number of threads equal to the number of cores available on your system. Once the program is executed, you should see something similar to the following lines toward the end of the output:

...
16/11/24 14:41:58 INFO Executor: Finished task 99.0 in stage 0.0
(TID 99). 872 bytes result sent to driver

16/11/24 14:41:58 INFO TaskSetManager: Finished task 99.0 in stage
0.0 (TID 99) in 59 ms on localhost (100/100)

16/11/24 14:41:58 INFO DAGScheduler: ResultStage 0 (reduce at
SparkPi.scala:38) finished in 1.988 s

16/11/24 14:41:58 INFO TaskSchedulerImpl: Removed TaskSet 0.0,
whose tasks have all completed, from pool

16/11/24 14:41:58 INFO DAGScheduler: Job 0 finished: reduce at
SparkPi.scala:38, took 2.235920 s

Pi is roughly 3.1409527140952713

The preceding command calls the org.apache.spark.examples.SparkPi class.
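SparkPi estimates Pi by Monte Carlo sampling: it scatters random points in the unit square and counts the fraction that fall inside the unit circle. As a rough illustration of the underlying computation (plain Python here, not Spark code), a single-threaded sketch looks like this:

```python
import random

def estimate_pi(num_samples, seed=42):
    # Sample points uniformly in the unit square and count
    # those inside the quarter circle x^2 + y^2 <= 1.
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The fraction inside approximates pi/4.
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))
```

The argument 100 passed to run-example plays a similar role to num_samples here: it controls how much sampling work is split into parallel tasks.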

The master parameter passed to spark-submit takes the local[N] form, where N is the number of threads to use. Passing local[*] uses all the cores on the local machine, which is a common choice.

To use only two threads, run the following command instead:

  $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master local[2] ./examples/jars/spark-examples_2.11-2.0.2.jar 100
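To get an intuition for what local[2] means, here is a minimal sketch (plain Python with a thread pool, not Spark's actual scheduler) of 100 independent sampling tasks being executed by two worker threads, analogous to how local[N] runs tasks on N threads inside one JVM:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def count_inside(args):
    # One "task": sample points in the unit square and count
    # those inside the quarter circle x^2 + y^2 <= 1.
    num_samples, seed = args
    rng = random.Random(seed)
    return sum(1 for _ in range(num_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def estimate_pi(num_tasks=100, samples_per_task=1000, num_threads=2):
    # num_tasks partitions of work, executed on num_threads worker
    # threads -- loosely mirroring "local[2]" with 100 tasks.
    tasks = [(samples_per_task, seed) for seed in range(num_tasks)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        inside = sum(pool.map(count_inside, tasks))
    return 4.0 * inside / (num_tasks * samples_per_task)

print(estimate_pi())
```

The helper names (count_inside, estimate_pi) are illustrative only; Spark's real task scheduling is far more sophisticated, but the partition-then-aggregate shape is the same.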