
Building the Spark source code with Maven

Installing Spark using binaries works fine in most cases. For advanced use cases, such as the following (but not limited to these), compiling from the source code is a better option:

  • Compiling for a specific Hadoop version
  • Adding the Hive integration
  • Adding the YARN integration

Getting ready

The following are the prerequisites for this recipe to work:

  • Java 1.6 or a later version
  • Maven 3.x
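
You can verify both prerequisites from the command line; the exact version strings will differ from one installation to another:

    $ java -version
    $ mvn -version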

How to do it...

The following are the steps to build the Spark source code with Maven:

  1. Increase MaxPermSize for the JVM's permanent generation so that the build does not run out of memory:
    $ echo "export _JAVA_OPTIONS=\"-XX:MaxPermSize=1G\"" >> /home/hduser/.bashrc
    
  2. Open a new terminal window and download the Spark source code from GitHub:
    $ wget https://github.com/apache/spark/archive/branch-1.4.zip
    
  3. Unpack the archive (it is a zip file, so it needs unzip rather than gunzip) and rename the extracted directory so that the following steps can refer to it as spark:
    $ unzip branch-1.4.zip
    $ mv spark-branch-1.4 spark
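    As a quick sanity check, the renamed directory should have the Maven build file at its top level:
    $ ls spark/pom.xml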
    
  4. Move to the spark directory:
    $ cd spark
    
  5. Compile the sources with YARN enabled, the Hadoop version set to 2.4, Hive enabled, and tests skipped for faster compilation:
    $ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean package
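    If the build succeeds, the assembly JAR is written under the assembly module. The exact file name depends on the Spark and Hadoop versions compiled (assuming the default Scala 2.10 build for this branch), so a glob is the easiest check:
    $ ls assembly/target/scala-2.10/spark-assembly-*.jar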
    
  6. Move back to the parent directory, then move the conf folder to /etc/spark so that it can be made a symbolic link later:
    $ cd ..
    $ sudo mv spark/conf /etc/spark
    
  7. Move the spark directory to /opt as it's an add-on software package:
    $ sudo mv spark /opt/infoobjects/spark
    
  8. Change the ownership of the spark home directory to root:
    $ sudo chown -R root:root /opt/infoobjects/spark
    
  9. Change the permissions of the spark home directory to 0755 (user: rwx, group: r-x, world: r-x):
    $ sudo chmod -R 755 /opt/infoobjects/spark
    
  10. Move to the spark home directory:
    $ cd /opt/infoobjects/spark
    
  11. Create a symbolic link:
    $ sudo ln -s /etc/spark conf
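    The symbolic link should now point back to the configuration directory under /etc:
    $ ls -l conf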
    
  12. Put the Spark executables in the path by editing .bashrc (note the escaped \$PATH, so that the variable is expanded when the shell starts rather than when this line is written):
    $ echo "export PATH=\$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
    
  13. Create the log directory in /var:
    $ sudo mkdir -p /var/log/spark
    
  14. Make hduser the owner of the Spark log directory:
    $ sudo chown -R hduser:hduser /var/log/spark
    
  15. Create the Spark tmp directory:
    $ mkdir /tmp/spark
    
  16. Configure Spark with the help of the following command lines:
    $ cd /etc/spark
    $ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
    $ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/Hadoop" >> spark-env.sh
    $ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
    $ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
    