Apache Spark Graph Processing
Rindra Ramamonjison
Downloading and installing Spark 1.4.1
In the following section, we will go through the Spark installation process in detail. Spark is built on Scala and runs on the Java Virtual Machine (JVM). Before installing Spark, you must first have the Java Development Kit 7 (JDK) installed on your computer.
Make sure you install the JDK rather than just the Java Runtime Environment (JRE). You can download it from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.
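Since it is specifically the JDK that is needed, a quick way to confirm what is installed is to check for both the Java runtime and the javac compiler from the terminal. This is only a minimal sanity check, and it assumes both tools are already on your PATH:
java -version    # should report a 1.7.x (Java 7) runtime
javac -version   # only available if the JDK, not just the JRE, is installed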
Next, download the latest release of Spark from the project website https://spark.apache.org/downloads.html. Perform the following three steps to get Spark installed on your computer:
- Select the package type Pre-built for Hadoop 2.6 and later, and then choose Direct Download. Make sure you choose a prebuilt version for Hadoop instead of the source code.
- Download the compressed TAR file called spark-1.4.1-bin-hadoop2.6.tgz and place it into a directory on your computer.
- Open the terminal and change to that directory. Using the following commands, extract the TAR file, rename the Spark root folder to spark-1.4.1, and then list the installed files and subdirectories:
tar -xf spark-1.4.1-bin-hadoop2.6.tgz
mv spark-1.4.1-bin-hadoop2.6 spark-1.4.1
cd spark-1.4.1
ls
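Once these commands finish, the launch scripts should be in place under the bin directory. As a quick sanity check (run from inside the spark-1.4.1 folder; the exact file listing can vary slightly between package types), you can confirm that the two main scripts are present:
ls bin/spark-shell bin/spark-submit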
That's it! You now have Spark and its libraries installed on your computer. Note the following files and directories in the spark-1.4.1 home folder:
- core: This directory contains the source code for the core components and API of Spark
- bin: This directory contains the executable files that are used to submit and deploy Spark applications, or to interact with Spark in a Spark shell
- graphx, mllib, sql, and streaming: These are the Spark libraries that provide a unified interface for different types of data processing, namely graph processing, machine learning, SQL queries, and stream processing
- examples: This directory contains demos and examples of Spark applications
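The scripts under bin also give a quick way to confirm that the installation actually runs. The two commands below are a minimal check: the first starts an interactive Spark (Scala) shell, and the second runs SparkPi, one of the standard examples bundled with Spark, where the trailing 10 is simply the number of partitions (slices) used for the computation:
./bin/spark-shell              # start an interactive Spark shell; exit with :quit
./bin/run-example SparkPi 10   # run the bundled SparkPi example on 10 slices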
It is often convenient to create shortcuts to the Spark home folder and the Spark examples folder. On Linux or Mac, open or create the ~/.bash_profile file in your home folder and insert the following lines:
export SPARKHOME="/[Where you put Spark]/spark-1.4.1/"
export SPARKSCALAEX="${SPARKHOME}examples/src/main/scala/org/apache/spark/examples/"
Then, execute the following command for these shortcuts to take effect:
source ~/.bash_profile
As a result, you can quickly access these folders in the terminal or Spark shell. For example, the example named LiveJournalPageRank.scala can be accessed with:
$SPARKSCALAEX/graphx/LiveJournalPageRank.scala
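For instance, assuming the examples source tree is included in your package, you can list all of the bundled GraphX examples directly from the terminal with the shortcut defined above:
ls $SPARKSCALAEX/graphx/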