
  • Learning Apache Spark 2
  • Muhammad Asif Abbasi

Passing functions to Spark (Python)

Python provides a simple way to pass functions to Spark. The Spark programming guide available at spark.apache.org recommends three ways to do this:

  • Lambda expressions, ideal for short functions that can be written as a single expression
  • Local defs inside the function calling into Spark, for longer code
  • Top-level functions in a module
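As a quick illustration of the lambda style, a word count can be sketched with plain-Python stand-ins for the Spark calls (the Spark equivalents are shown in comments, since they need a live SparkContext; the input lines are illustrative):

```python
from functools import reduce

lines = ["to be or not to be", "that is the question"]

# In Spark the same logic would read:
#   sc.textFile("input.txt") \
#     .map(lambda line: len(line.split(" "))) \
#     .reduce(lambda a, b: a + b)
words_per_line = list(map(lambda line: len(line.split(" ")), lines))
total = reduce(lambda a, b: a + b, words_per_line)
print(total)  # 10
```

Both functions fit in a single expression, which is exactly the case where lambdas are the right tool.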

While we have already used lambda functions in some of the previous examples, let's look at local definitions of functions. We can encapsulate our business logic, the splitting of lines into words and the counting, into two separate functions, as shown below.

def splitter(lineOfText):
    words = lineOfText.split(" ")
    return len(words)

def aggregate(numWordsLine1, numWordsLineNext):
    totalWords = numWordsLine1 + numWordsLineNext
    return totalWords

Let's see the working code example:

Figure 2.15: Code example of Python word count (local definition of functions)
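In case the screenshot is unavailable, the working example can be sketched as follows; the Spark pipeline is shown in a comment because it needs a running SparkContext, and the input lines are illustrative:

```python
from functools import reduce

def splitter(lineOfText):
    words = lineOfText.split(" ")
    return len(words)

def aggregate(numWordsLine1, numWordsLineNext):
    totalWords = numWordsLine1 + numWordsLineNext
    return totalWords

# With a live SparkContext the pipeline would be:
#   total = sc.textFile("input.txt").map(splitter).reduce(aggregate)
# Plain-Python equivalent for illustration:
lines = ["to be or not to be", "that is the question"]
total = reduce(aggregate, map(splitter, lines))
print(total)  # 10
```

Because both functions are local defs, they are serialized and shipped to the executors just like lambdas, but they can grow to any length and carry a docstring.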

Here's another way to implement this by defining the functions as a part of a UtilFunctions class, and referencing them within your map and reduce functions:

Figure 2.16: Code example of Python word count (Utility class)
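A minimal sketch of the utility-class variant might look like this; the method names mirror the earlier local defs, but the exact names in the book's screenshot may differ:

```python
class UtilFunctions:
    """Hypothetical container for the word-count helpers."""

    def splitter(self, lineOfText):
        # Count the words in one line of text.
        return len(lineOfText.split(" "))

    def aggregate(self, numWordsLine1, numWordsLineNext):
        # Combine two per-line word counts.
        return numWordsLine1 + numWordsLineNext

# Spark usage (requires a SparkContext):
#   uf = UtilFunctions()
#   total = sc.textFile("input.txt").map(uf.splitter).reduce(uf.aggregate)

uf = UtilFunctions()
print(uf.aggregate(uf.splitter("to be or not to be"),
                   uf.splitter("that is the question")))  # 10
```

Note that passing `uf.splitter` to `map` captures the whole `uf` instance in the closure, which is exactly the serialization cost discussed next.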

You may want to be a bit cheeky here and try to add a countWords() method to UtilFunctions, so that it takes an RDD as input and returns the total number of words. This approach has potential performance implications, because referencing an instance method inside a transformation means the whole object must be serialized and sent to the cluster along with it. Let's see how this can be implemented and the results in the following screenshot:

Figure 2.17: Code example of Python word count (Utility class - 2)

This can be avoided by copying the referenced data field into a local variable, rather than accessing it through the object inside the closure, so that only the value itself is captured and shipped.
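The local-copy pattern can be sketched with a hypothetical class; the class and method names are illustrative, not from the book:

```python
class WordCounter:
    """Hypothetical class holding a field referenced inside a closure."""

    def __init__(self):
        self.separator = " "

    def count_words_bad(self, rdd):
        # The lambda references self.separator, so the entire
        # WordCounter object is serialized and shipped to every executor.
        return (rdd.map(lambda line: len(line.split(self.separator)))
                   .reduce(lambda a, b: a + b))

    def count_words_good(self, rdd):
        # Copy the field into a local variable first: the closure then
        # captures only the small string, not the whole object.
        sep = self.separator
        return (rdd.map(lambda line: len(line.split(sep)))
                   .reduce(lambda a, b: a + b))
```

Both methods return the same result; the difference is only in how much data Spark has to serialize with the task.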

Now that we have seen how to pass functions to Spark, and have already used some of the transformations and actions in the previous examples, including map, flatMap, and reduce, let's look at the most common transformations and actions used in Spark. The list is not exhaustive; you can find more examples in the programming guide section of the Apache Spark documentation (http://bit.ly/SparkProgrammingGuide). For a comprehensive list of all the available functions, you might want to check the following API docs:

Table 2.1 - RDD and PairRDD API references
