Building the Spark source code with Maven
Installing Spark from binaries works fine in most cases. For advanced cases, such as the following (among others), compiling from the source code is the better option:
- Compiling for a specific Hadoop version
- Adding the Hive integration
- Adding the YARN integration
Getting ready
The following are the prerequisites for this recipe to work:
- Java 1.6 or a later version
- Maven 3.x
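Before moving on, it is worth confirming that both prerequisites are installed and on the path. A minimal check, assuming java and mvn are already on this machine:
$ java -version    # should report version 1.6 or later
$ mvn -version     # should report Maven 3.x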
How to do it...
The following are the steps to build the Spark source code with Maven:
- Increase MaxPermSize for the heap:
$ echo "export _JAVA_OPTIONS=\"-XX:MaxPermSize=1G\"" >> /home/hduser/.bashrc
- Open a new terminal window and download the Spark source code from GitHub:
$ wget https://github.com/apache/spark/archive/branch-1.4.zip
- Unpack the archive (it is a zip file, so use unzip rather than gunzip):
$ unzip branch-1.4.zip
- The archive unpacks into a directory named spark-branch-1.4; rename it to spark and move into it:
$ mv spark-branch-1.4 spark
$ cd spark
- Compile the sources with these flags: YARN enabled, Hadoop version 2.4, Hive enabled, and tests skipped for faster compilation (an alternative packaging script is sketched after these steps):
$ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean package
- Move back to the parent directory, and then move the conf folder to /etc/spark so that it can be made a symbolic link later:
$ cd ..
$ sudo mv spark/conf /etc/spark
- Move the spark directory to /opt as it's an add-on software package (creating /opt/infoobjects first in case it does not exist):
$ sudo mkdir -p /opt/infoobjects
$ sudo mv spark /opt/infoobjects/spark
- Change the ownership of the spark home directory to root:
$ sudo chown -R root:root /opt/infoobjects/spark
- Change the permissions of the spark home directory (0755 = user: rwx, group: r-x, world: r-x):
$ sudo chmod -R 755 /opt/infoobjects/spark
- Move to the spark home directory:
$ cd /opt/infoobjects/spark
- Create a symbolic link:
$ sudo ln -s /etc/spark conf
- Put the Spark executables in the path by editing .bashrc (the escaped \$PATH keeps the variable from being expanded while the line is written):
$ echo "export PATH=\$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
- Create the log directory in /var:
$ sudo mkdir -p /var/log/spark
- Make hduser the owner of the Spark log directory:
$ sudo chown -R hduser:hduser /var/log/spark
- Create the Spark tmp directory:
$ mkdir /tmp/spark
- Configure Spark with the help of the following command lines; a quick smoke test of the finished installation is sketched after these steps:
$ cd /etc/spark
$ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
$ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
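As an aside, the Spark source tree also ships a make-distribution.sh script that wraps essentially the same Maven build as step 5 and additionally packages the result as a binary tarball. A minimal sketch, assuming the same profiles as above and that it is run from the spark source directory (the --name label is an arbitrary suffix for the tarball):
$ # builds Spark and produces a spark-<version>-bin-custom-spark.tgz tarball in the source root
$ ./make-distribution.sh --name custom-spark --tgz -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive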
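Finally, a quick smoke test of the finished installation, assuming all of the preceding steps completed without errors. Both spark-submit and the bundled SparkPi example ship with the build, so this relies only on what was installed above:
$ source /home/hduser/.bashrc    # pick up the new PATH entry
$ spark-submit --version         # should print the Spark version that was just built
$ run-example SparkPi 10         # runs a bundled example job locally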