A single machine
A single machine is the simplest use case for Spark. It is also a great way to sanity-check your build. In spark/bin, there is a shell script called run-example, which can be used to launch a Spark job. The run-example script takes the name of a Spark class and some arguments. Earlier, we used the run-example script from the /bin directory to calculate the value of Pi. There is a collection of sample Spark jobs in examples/src/main/scala/org/apache/spark/examples/.
All of the sample programs take a master parameter (the cluster manager), which can be the URL of a distributed cluster or local[N], where N is the number of threads.
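On a single machine, the most common master values are listed here as a quick reference (not an exhaustive list; the host and port in the last entry are illustrative):
local              # one worker thread, no parallelism
local[4]           # four worker threads
local[*]           # one worker thread per logical core
spark://host:7077  # a standalone Spark cluster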
Going back to our run-example script, it invokes the more general bin/spark-submit script.
For now, let's stick with the run-example script.
To run GroupByTest locally, try running the following command:
bin/run-example GroupByTest
This command will produce an output like the one given here:
14/11/15 06:28:40 INFO SparkContext: Job finished: count at GroupByTest.scala:51, took 0.494519333 s 2000
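GroupByTest also accepts optional positional arguments; in the example source bundled with Spark these are, in order, the number of mappers, the number of key-value pairs per mapper, the value size, and the number of reducers (treat this list as an assumption and check examples/src/main/scala/org/apache/spark/examples/GroupByTest.scala in your version). For instance, the following run doubles the default number of mappers:
bin/run-example GroupByTest 4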
Note
All the examples in this book can be run on a Spark installation on a local machine, so you can read through the rest of the chapter for additional information after you have gotten some hands-on exposure to Spark running locally.