Machine Learning with Spark (Second Edition)
Rajdeep Dua, Manpreet Singh Ghotra, Nick Pentreath
Getting Up and Running with Spark
Apache Spark is a framework for distributed computing that aims to make it simpler to write programs that run in parallel across many nodes in a cluster of computers or virtual machines. It abstracts away resource scheduling, job submission, execution, tracking, and communication between nodes, as well as the low-level operations inherent in parallel data processing, and it provides a higher-level API for working with distributed data. In this way, it is similar to other distributed processing frameworks such as Apache Hadoop; however, the underlying architecture is somewhat different.
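As a rough sketch of what that higher-level API looks like, the following Scala program distributes a local collection as an RDD and runs a parallel map-and-reduce over it. The object name, the `local[*]` master, and the toy computation are placeholders chosen for illustration, not code from this book:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DistributedSumSketch {
  def main(args: Array[String]): Unit = {
    // Run locally using all available cores; on a cluster the master URL
    // would point at the cluster manager instead.
    val conf = new SparkConf().setAppName("DistributedSumSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Distribute a local collection across the workers as an RDD.
    val numbers = sc.parallelize(1 to 1000)

    // Express the computation with high-level operations; Spark handles
    // task scheduling, distribution, and collecting the result.
    val sumOfSquares = numbers.map(x => x.toLong * x).reduce(_ + _)
    println(s"Sum of squares: $sumOfSquares")

    sc.stop()
  }
}
```

The point is that the code only states what to compute; Spark decides how to split the work into tasks and where those tasks run.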
Spark began as a research project at the AMPLab at the University of California, Berkeley (https://amplab.cs.berkeley.edu/projects/spark-lightning-fast-cluster-computing/). The project focused on the use case of distributed machine learning algorithms; hence, Spark is designed from the ground up for high performance in applications of an iterative nature, where the same data is accessed multiple times. This performance is achieved primarily by caching datasets in memory, combined with the low latency and low overhead of launching parallel computation tasks. Together with other features such as fault tolerance, flexible distributed-memory data structures, and a powerful functional API, Spark has proved to be broadly useful for a wide range of large-scale data processing tasks, over and above machine learning and iterative analytics.
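To make the caching point concrete, here is a small sketch you could paste into the interactive Spark shell (where the SparkContext `sc` is already defined); the file path and the number of passes are illustrative placeholders. Marking the RDD with `cache()` means each pass after the first reads the parsed records from memory rather than re-reading and re-parsing the file:

```scala
// Assumes the Spark shell, where `sc` is already provided.
// "data/points.txt" is a placeholder path to any comma-separated numeric file.
val points = sc.textFile("data/points.txt")
  .map(line => line.split(",").map(_.toDouble))
  .cache() // keep the parsed records in memory after the first pass

// An iterative algorithm touches the same data repeatedly; here each pass
// after the first is served from the in-memory cache.
for (iteration <- 1 to 5) {
  val total = points.map(row => row(0)).sum()
  println(s"Pass $iteration: sum of first column = $total")
}
```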
For more information, you can visit:
Performance-wise, Spark is much faster than Hadoop for comparable workloads, as the following comparison illustrates:
[Figure: performance comparison of Spark and Hadoop]
Spark runs in four modes (a sketch of the corresponding master URLs follows this list):
- The standalone local mode, where all Spark processes are run within the same Java Virtual Machine (JVM) process
- The standalone cluster mode, using Spark's own built-in job-scheduling framework
- Using Mesos, a popular open source cluster-computing framework
- Using YARN (commonly referred to as NextGen MapReduce), Hadoop's cluster resource management and scheduling framework
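The mode is selected by the master URL handed to Spark when an application starts. The following Scala sketch shows the typical URL form for each mode set through `SparkConf`; the host names and ports are placeholders (7077 and 5050 are simply the common defaults for standalone and Mesos masters):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MasterUrlSketch {
  def main(args: Array[String]): Unit = {
    // Typical master URLs for the four modes (hosts/ports are placeholders):
    //   "local[*]"                 - standalone local mode: all processes in one JVM
    //   "spark://master-host:7077" - standalone cluster mode: Spark's own scheduler
    //   "mesos://master-host:5050" - running on a Mesos cluster
    //   "yarn"                     - running on YARN; cluster details come from the
    //                                Hadoop configuration on the classpath
    val conf = new SparkConf()
      .setAppName("MasterUrlSketch")
      .setMaster("local[*]") // swap in one of the URLs above to change modes

    val sc = new SparkContext(conf)
    println(s"Running against master: ${sc.master}")
    sc.stop()
  }
}
```

In practice, the master is often left out of the application code and supplied instead with the `--master` option of `spark-submit`, so the same application JAR can be run in any of the modes.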
In this chapter, we will do the following:
- Download the Spark binaries and set up a development environment that runs in Spark's standalone local mode. This environment will be used throughout the book to run the example code.
- Explore Spark's programming model and API using Spark's interactive console.
- Write our first Spark program in Scala, Java, R, and Python.
- Set up a Spark cluster using Amazon's Elastic Compute Cloud (EC2) platform, which can handle larger datasets and heavier computational requirements than the local mode.
- Set up a Spark cluster using Amazon Elastic MapReduce (EMR).
If you have previous experience in setting up Spark and are familiar with the basics of writing a Spark program, feel free to skip this chapter.