Machine Learning with Spark (Second Edition)
Rajdeep Dua, Manpreet Singh Ghotra, Nick Pentreath
Getting Up and Running with Spark
Apache Spark is a framework for distributed computing that aims to make it simpler to write programs that run in parallel across many nodes in a cluster of computers or virtual machines. It abstracts away resource scheduling, job submission, execution, tracking, and communication between nodes, as well as the low-level operations inherent in parallel data processing, and it provides a higher-level API for working with distributed data. In this way, it is similar to other distributed processing frameworks, such as Apache Hadoop; however, the underlying architecture is somewhat different.
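As a minimal sketch of that higher-level API, the following Scala snippet distributes a small local collection across the cluster (here, a local-mode cluster of two threads) and transforms it functionally; Spark takes care of partitioning, scheduling, and communication. The object name ApiSketch is our own, not from the book:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ApiSketch {
  def main(args: Array[String]): Unit = {
    // Local mode: "local[2]" runs Spark inside this JVM with two worker threads
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("ApiSketch"))

    // Distribute a local collection and operate on it with the functional API;
    // each partition is processed in parallel
    val squares = sc.parallelize(1 to 10).map(x => x * x)
    println(squares.sum())  // prints 385.0

    sc.stop()
  }
}
```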
Spark began as a research project at the AMPLab at the University of California, Berkeley (https://amplab.cs.berkeley.edu/projects/spark-lightning-fast-cluster-computing/). The project focused on the use case of distributed machine learning algorithms, so Spark is designed from the ground up for high performance in iterative applications, where the same data is accessed many times. This performance is achieved primarily by caching datasets in memory, combined with low latency and low overhead in launching parallel computation tasks. Together with other features, such as fault tolerance, flexible distributed-memory data structures, and a powerful functional API, Spark has proved to be broadly useful for a wide range of large-scale data processing tasks, over and above machine learning and iterative analytics.
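To make the caching point concrete, here is a hedged sketch of an iterative job; the input path data/points.txt is a hypothetical placeholder, and sc is the SparkContext from the previous snippet. Caching the parsed dataset once means each subsequent pass reads it from memory instead of re-reading and re-parsing it from disk:

```scala
// Parse a hypothetical file of comma-separated numeric points,
// then pin the parsed data in memory for reuse
val points = sc.textFile("data/points.txt")      // placeholder path
  .map(_.split(",").map(_.toDouble))
  .cache()                                       // keep parsed rows in memory

// Each pass reuses the cached RDD; without cache(), every iteration
// would re-read and re-parse the file (illustrative repeated access)
for (i <- 1 to 10) {
  val meanNorm = points.map(p => math.sqrt(p.map(x => x * x).sum)).mean()
  println(s"iteration $i: mean norm = $meanNorm")
}
```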
Performance-wise, Spark can be significantly faster than Hadoop MapReduce for comparable workloads, as the following comparison shows:
[Figure: performance comparison of Spark versus Hadoop]
Spark runs in four modes (see the configuration sketch after this list):
- The standalone local mode, where all Spark processes are run within the same Java Virtual Machine (JVM) process
- The standalone cluster mode, using Spark's own built-in job-scheduling framework
- Using Mesos, a popular open source cluster-computing framework
- Using YARN (commonly referred to as NextGen MapReduce), Hadoop's cluster resource manager
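The mode is selected through the master URL passed to Spark. The following sketch shows illustrative master settings for each of the four modes; the host names and ports are placeholders, not real clusters:

```scala
import org.apache.spark.SparkConf

// Standalone local mode: all Spark processes in this JVM,
// "local[*]" uses one worker thread per available core
val localConf      = new SparkConf().setMaster("local[*]")

// Standalone cluster mode: Spark's own built-in cluster manager
val standaloneConf = new SparkConf().setMaster("spark://master-host:7077")

// Apache Mesos cluster manager
val mesosConf      = new SparkConf().setMaster("mesos://mesos-host:5050")

// Hadoop YARN: no host in the URL; the cluster is located via the
// HADOOP_CONF_DIR / YARN_CONF_DIR environment variables
val yarnConf       = new SparkConf().setMaster("yarn")
```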
In this chapter, we will do the following:
- Download the Spark binaries and set up a development environment that runs in Spark's standalone local mode. This environment will be used throughout the book to run the example code.
- Explore Spark's programming model and API using Spark's interactive console.
- Write our first Spark program in Scala, Java, R, and Python.
- Set up a Spark cluster using Amazon's Elastic Compute Cloud (EC2) platform, which can be used for larger datasets and heavier computational requirements than local mode can handle.
- Set up a Spark cluster using Amazon Elastic MapReduce (EMR).
If you have previous experience in setting up Spark and are familiar with the basics of writing a Spark program, feel free to skip this chapter.