Creating and filtering RDD
Let's start by creating an RDD of strings:
scala> val stringRdd = sc.parallelize(Array("Java","Scala","Python","Ruby","JavaScript","Java"))
stringRdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:24
Now, we will filter this RDD to keep only those strings that start with the letter J:
scala> val filteredRdd = stringRdd.filter(s => s.startsWith("J"))
filteredRdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at filter at <console>:26
In the first chapter, we learnt that if an operation on an RDD returns an RDD, it is a transformation; otherwise, it is an action.
The output of the preceding command clearly shows that the filter operation returned an RDD, so filter is a transformation.
Now, we will run an action on filteredRdd to see its elements. Let's run collect on filteredRdd:
scala> val list = filteredRdd.collect
list: Array[String] = Array(Java, JavaScript, Java)
As per the output of the previous command, the collect operation returned an array of strings. So, it is an action.
Now, let's see the elements of the list variable:
scala> list
res5: Array[String] = Array(Java, JavaScript, Java)
We are left with only the elements that start with J, which was our desired outcome.
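Since this book targets Java developers, the same pipeline can also be expressed with the Java RDD API instead of the Scala shell. The following is a minimal sketch; the class name, local master, and application name are illustrative assumptions, not taken from the text:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class FilterRddExample {
    public static void main(String[] args) {
        // Local master and app name are illustrative choices
        SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("FilterRddExample");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // Equivalent of sc.parallelize(Array(...)) in the Scala shell
        JavaRDD<String> stringRdd = jsc.parallelize(
                Arrays.asList("Java", "Scala", "Python", "Ruby", "JavaScript", "Java"));

        // filter is a transformation: it returns a new RDD and is evaluated lazily
        JavaRDD<String> filteredRdd = stringRdd.filter(s -> s.startsWith("J"));

        // collect is an action: it triggers the computation and returns the results
        List<String> list = filteredRdd.collect();
        System.out.println(list); // [Java, JavaScript, Java]

        jsc.close();
    }
}

Run locally, this sketch prints the same three elements that the Scala shell returned above.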