
Installing the prebuilt distribution

Let's download the prebuilt Spark distribution and install it. Later, we will also build a version from source. The download is straightforward. The download page is at http://spark.apache.org/downloads.html. Select the options as shown in the following screenshot:

We will use wget from the command line. You can do a direct download as well:

cd /opt
sudo wget http://www-us.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz

We are downloading the prebuilt version for Apache Hadoop 2.7 from one of the possible mirrors. We could have easily downloaded other prebuilt versions as well, as shown in the following screenshot:
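If you want to make sure the archive arrived intact, the Spark downloads page also publishes checksum and signature files for each release. A minimal verification sketch (the exact checksum format published for a given release may differ) is to hash the local file and compare it against the published value:

# Compute a local checksum and compare it by eye with the value
# published alongside the archive on the Spark downloads page
sha512sum spark-2.0.0-bin-hadoop2.7.tgz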

To uncompress it, execute the following command:

sudo tar xvf spark-2.0.0-bin-hadoop2.7.tgz
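As an optional convenience, not part of the steps above, you can point a SPARK_HOME environment variable at the extracted directory and put its bin directory on your PATH, so the Spark scripts can be invoked without the full path:

# Assumed install location from the previous steps
export SPARK_HOME=/opt/spark-2.0.0-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH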

To test the installation, run the following command:

/opt/spark-2.0.0-bin-hadoop2.7/bin/run-example SparkPi 10

It will fire up the Spark stack and compute an approximation of Pi. The result will be as shown in the following screenshot:
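If you would rather read the estimate directly on the console than pick it out of the log output, one way (assuming the default logging configuration) is to filter the example's output for the result line that SparkPi prints:

# Surface only the "Pi is roughly ..." line from the example's output
/opt/spark-2.0.0-bin-hadoop2.7/bin/run-example SparkPi 10 2>&1 | grep "Pi is roughly"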
