- Fast Data Processing with Spark 2 (Third Edition)
- Krishna Sankar
Chapter 1. Installing Spark and Setting Up Your Cluster
This chapter will detail some common methods to set up Spark. Spark on a single machine is excellent for testing or exploring small datasets, but here you will also learn to use Spark's built-in deployment scripts to set up a dedicated cluster via Secure Shell (SSH). For cloud deployments of Spark, this chapter will look at EC2 (both traditional EC2 and Elastic MapReduce). Feel free to skip this chapter if you already have your local Spark instance installed and want to get straight to programming. The best way to navigate through installation is to use this chapter as a guide and refer to the Spark installation documentation at http://spark.apache.org/docs/latest/cluster-overview.html.
Regardless of how you are going to deploy Spark, you will want to get the latest version of Spark from https://spark.apache.org/downloads.html (Version 2.0.0 as of this writing); both source code and prebuilt packages are available at this link. Spark currently releases a new version roughly every 90 days. Coders who want to work with the latest builds can clone the code directly from the repository at https://github.com/apache/spark; the build instructions are available at https://spark.apache.org/docs/latest/building-spark.html. To interact with the Hadoop Distributed File System (HDFS), you need to use a version of Spark that is built against the same version of Hadoop as your cluster. For Version 2.0.0 of Spark, prebuilt packages are available for Hadoop Versions 2.3, 2.4, 2.6, and 2.7. If you are up for the challenge, it's recommended that you build from source, as this gives you the flexibility of choosing the HDFS version you want to support as well as applying patches. In this chapter, we will do both.
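As an illustration of the build-from-source route, the following shell sketch downloads the Spark 2.0.0 source release and compiles it against a specific Hadoop version using the Maven wrapper bundled with the source. The mirror URL and the `-Dhadoop.version=2.7.3` value are only examples; substitute the download mirror of your choice and the Hadoop version your cluster actually runs.

```
# Fetch and unpack the Spark 2.0.0 source release
# (example mirror; pick a link from the downloads page)
wget https://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0.tgz
tar -xzf spark-2.0.0.tgz
cd spark-2.0.0

# Build against Hadoop 2.7.x with the bundled Maven wrapper;
# change hadoop.version to match your cluster
./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package
```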
Tip
As you explore the latest version of Spark, an essential task is to read the release notes, especially what has been changed and deprecated. For 2.0.0, the list is fairly long and is available at https://spark.apache.org/releases/spark-release-2-0-0.html#removals-behavior-changes-and-deprecations. For example, the notes describe where the EC2 scripts have moved to and the removal of support for Hadoop 2.1 and earlier.
To compile the Spark source, you will need the appropriate version of Scala and the matching JDK. The Spark source tarball includes the required Scala components. The following discussion is for information only; there is no need to install Scala.
The Spark developers have done a good job of managing the dependencies. Refer to the https://spark.apache.org/docs/latest/building-spark.html web page for the latest information on this. The website states that:
"Building Spark using Maven requires Maven 3.3.9 or newer and Java 7+."
Scala gets pulled down as a dependency by Maven (currently Scala 2.11.8); it does not need to be installed separately, as it is just a bundled dependency.
Just as a note, Spark 2.0.0 runs with Scala 2.11.8 by default, but it can be compiled to run with Scala 2.10; I have seen e-mails about this in the Spark users' group.
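If you do need a Scala 2.10 build, the Spark 2.0 build documentation describes a switch script that ships with the source; a minimal sketch, assuming you are at the top of the Spark source tree, looks like this:

```
# Switch the build definition to Scala 2.10
./dev/change-scala-version.sh 2.10

# Rebuild with the scala-2.10 property enabled
./build/mvn -Pyarn -Dscala-2.10 -DskipTests clean package
```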
Tip
This brings up another interesting point about the Spark community. The two essential mailing lists are user@spark.apache.org and dev@spark.apache.org. More details about the Spark community are available at https://spark.apache.org/community.html.