- Learning Apache Spark 2
- Muhammad Asif Abbasi
Running Spark examples
Spark comes with packaged examples for Java, Python, Scala, and R. We'll demonstrate how you can run a program provided in the examples directory.
As we only have a local installation, we'll run the Spark Pi example locally on 4 cores. The examples are available at the Apache Spark GitHub page http://bit.ly/28S1hDY. We've taken an excerpt out of the example to explain how SparkContext is initialized:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Spark Pi")
val spark = new SparkContext(conf)
The example comes packaged with the Spark binaries, and the code can be downloaded from GitHub too. Looking closely at the code, you will realize that we instantiate our own SparkContext object from a SparkConf object. The application name Spark Pi will appear in the Spark UI as a running application during the execution, and will help you track the status of your job. Remember, this is in stark contrast to the spark-shell, where a SparkContext is automatically instantiated and passed as a reference.
Let's run this example with the spark-submit script:
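As a rough sketch, the following command submits the packaged SparkPi class to a local master with 4 cores. It assumes you are in the Spark installation directory and that the examples JAR sits under examples/jars; the exact JAR filename varies with your Spark and Scala versions, so the one shown here is an assumption, and the trailing argument is the number of partitions to use:
# run the bundled SparkPi example on 4 local cores (JAR filename is an assumption)
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[4] \
  examples/jars/spark-examples_2.11-2.0.0.jar 100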

The log of the script spans over multiple pages, so we will skip over the intermediate output and go to the part where the result is printed. Remember, in this case we are running Spark Pi, which prints out a value of Pi. Here's the second part of the log:

Figure 1.17: Running Spark Pi example
So far we have seen the example in Scala. If you look at the Python version of this example, you will realize that we just need to pass in the Python source file; we do not have to pass in any JAR files, as we are not referencing any other code. Similar to the Scala example, we have to instantiate the SparkContext directly, unlike the PySpark shell, which automatically provides you with a reference to the context object:
from pyspark import SparkContext
sc = SparkContext(appName="PythonPi")
Running the Spark Pi example in Python is a bit different from the Scala example:
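Again as a sketch under the same assumptions about the installation layout, the Python example is submitted as a plain source file; the pi.py path below follows the layout of a typical Spark distribution (examples/src/main/python), and the optional trailing argument is the number of partitions:
# run the bundled Python Pi example on 4 local cores (script path is an assumption)
./bin/spark-submit \
  --master local[4] \
  examples/src/main/python/pi.py 100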

Similar to the Scala example, the log of the Python Pi program spans multiple pages. We'll just move directly to the part where the value of Pi is printed in the log:

Figure 1.18: Running Spark Pi Python example
Building your own programs
We have tested pre-compiled programs but, as discussed earlier in this chapter, you can create your own programs, use sbt or Maven to package the application, and run it using the spark-submit script (a sketch of this workflow follows below). In the later chapters of this book, we will use both the REPL environments and spark-submit for various code examples. For a complete code example, we'll build a recommendation system in Chapter 9, Building a Recommendation System, and predict customer churn in a telco environment in Chapter 10, Customer Churn Prediction. Both of these examples (though fictional) will help you understand the overall life cycle of a machine learning application.
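As a minimal sketch of that workflow, assuming a hypothetical sbt project whose main class is com.example.MyApp and which is built against Scala 2.11 (the project name, class name, and artifact path below are illustrative, not taken from the book), packaging and submitting your own application could look like this:
# build the application JAR with sbt (artifact name and path are assumptions)
sbt package

# submit the packaged JAR to a local master with 4 cores
./bin/spark-submit \
  --class com.example.MyApp \
  --master local[4] \
  target/scala-2.11/myapp_2.11-1.0.jar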