
Getting Up and Running with Spark

Apache Spark is a framework for distributed computing; this framework aims to make it simpler to write programs that run in parallel across many nodes in a cluster of computers or virtual machines. It tries to abstract the tasks of resource scheduling, job submission, execution, tracking, and communication between nodes, as well as the low-level operations inherent in parallel data processing. It also provides a higher-level API for working with distributed data. In this way, it is similar to other distributed processing frameworks such as Apache Hadoop; however, the underlying architecture is somewhat different.
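To give a feel for this higher-level API before we cover it properly, here is a minimal sketch of a complete Spark program in Scala. It is illustrative only: the application name is made up, and it assumes the standalone local mode that we set up later in this chapter.

    import org.apache.spark.{SparkConf, SparkContext}

    object ApiSketch {
      def main(args: Array[String]): Unit = {
        // Local mode: all Spark processes run inside this JVM,
        // with one worker thread per available core.
        val conf = new SparkConf().setAppName("ApiSketch").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // Distribute a local collection and operate on it with functional
        // transformations, without writing any explicit parallel code.
        val numbers = sc.parallelize(1 to 100)
        val sumOfSquares = numbers.map(n => n * n).reduce(_ + _)
        println(s"Sum of squares of 1..100: $sumOfSquares")

        sc.stop()
      }
    }

Note that the code never mentions nodes, threads, or message passing; Spark decides how to split the collection and the computation across whatever resources the master URL describes.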

Spark began as a research project at the AMPLab at the University of California, Berkeley (https://amplab.cs.berkeley.edu/projects/spark-lightning-fast-cluster-computing/). The project focused on the use case of distributed machine learning algorithms; hence, Spark is designed from the ground up for high performance in applications of an iterative nature, where the same data is accessed many times. This performance is achieved primarily by caching datasets in memory, combined with the low latency and low overhead of launching parallel computation tasks. Together with other features such as fault tolerance, flexible distributed-memory data structures, and a powerful functional API, Spark has proved to be broadly useful for a wide range of large-scale data processing tasks, over and above machine learning and iterative analytics.
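The in-memory caching just mentioned is exposed directly in the API. The following sketch illustrates the pattern; it is intended for the Spark interactive shell (where a SparkContext is already available as sc), and the dataset and pass count are invented purely for illustration.

    // cache() marks the RDD to be kept in memory after it is first computed,
    // so later passes read from memory instead of recomputing from scratch.
    val data = sc.parallelize(1 to 1000000).map(_.toLong).cache()

    // A toy stand-in for an iterative algorithm: ten passes over the same data.
    var total = 0L
    for (_ <- 1 to 10) {
      // The first pass computes and caches; the remaining passes hit memory.
      total += data.reduce(_ + _)
    }
    println(s"Total across 10 passes: $total")

In a real iterative algorithm, such as gradient descent, each pass would update model parameters rather than a running sum, but the access pattern, repeated reads of one cached dataset, is the same.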

Performance-wise, Spark can be significantly faster than Hadoop MapReduce for comparable workloads, particularly iterative ones where its in-memory caching pays off, as illustrated by the following benchmark graph from the AMPLab:

(Figure: running-time comparison of Hadoop and Spark on an iterative logistic regression workload. Source: https://amplab.cs.berkeley.edu/wp-content/uploads/2011/11/spark-lr.png)

Spark runs in four modes (a short sketch of how each mode is selected follows the list):

  • The standalone local mode, where all Spark processes are run within the same Java Virtual Machine (JVM) process
  • The standalone cluster mode, using Spark's own built-in job-scheduling framework
  • Using Mesos, a popular open source cluster-computing framework
  • Using YARN (commonly referred to as NextGen MapReduce), Hadoop's cluster resource-management framework
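Each mode is selected through the master URL passed to Spark when an application is created. The sketch below shows the four options side by side; the host names and ports are placeholders, and the exact YARN master string varies between Spark versions, so treat this as a guide rather than copy-and-paste configuration.

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf().setAppName("MasterUrlSketch")

    conf.setMaster("local[*]")                // standalone local mode: all cores in one JVM
    // conf.setMaster("spark://master:7077")  // standalone cluster mode (Spark's own scheduler)
    // conf.setMaster("mesos://master:5050")  // Mesos cluster
    // conf.setMaster("yarn-client")          // Hadoop YARN (string varies by Spark version)

    val sc = new SparkContext(conf)

The same program can therefore move from a laptop to a cluster by changing only the master URL.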

In this chapter, we will do the following:

  • Download the Spark binaries and set up a development environment that runs in Spark's standalone local mode. This environment will be used throughout the book to run the example code.
  • Explore Spark's programming model and API using Spark's interactive console.
  • Write our first Spark program in Scala, Java, R, and Python.
  • Set up a Spark cluster using Amazon's Elastic Compute Cloud (EC2) platform, which can be used for larger datasets and heavier computational requirements than the local mode can handle.
  • Set up a Spark cluster using Amazon Elastic MapReduce (EMR).

If you have previous experience in setting up Spark and are familiar with the basics of writing a Spark program, feel free to skip this chapter.
