- Apache Spark 2.x for Java Developers
- Sourav Gulati, Sumit Kumar
Creating and filtering an RDD
Let's start by creating an RDD of strings:
scala> val stringRdd = sc.parallelize(Array("Java","Scala","Python","Ruby","JavaScript","Java"))
stringRdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:24
Now, we will filter this RDD to keep only those strings that start with the letter J:
scala> val filteredRdd = stringRdd.filter(s => s.startsWith("J"))
filteredRdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at filter at <console>:26
In the first chapter, we learned that if an operation on an RDD returns another RDD, it is a transformation; otherwise, it is an action.
The output of the preceding command clearly shows that the filter operation returned an RDD, so filter is a transformation.
Now, we will run an action on filteredRdd to see its elements. Let's run collect on filteredRdd:
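Because filter is a transformation, it is evaluated lazily: Spark only records the lineage, and no data is actually processed until an action is invoked. As a minimal sketch continuing the same session (upperRdd is a name introduced here for illustration):

```scala
// map, like filter, returns an RDD, so it is also a lazy transformation;
// nothing is computed when this line runs.
val upperRdd = filteredRdd.map(s => s.toUpperCase)

// collect is an action: it triggers evaluation of the whole lineage
// (parallelize -> filter -> map) and returns the results as an array.
upperRdd.collect
```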
scala>val list = filteredRdd.collect
list: Array[String] = Array(Java, JavaScript, Java)
As per the output of the previous command, the collect operation returned an array of strings rather than an RDD, so it is an action.
Now, let's see the elements of the list variable:
scala> list
res5: Array[String] = Array(Java, JavaScript, Java)
We are left with only the elements that start with J, which was our desired outcome.
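collect is not the only action we could have used here. count, for instance, also triggers evaluation but returns just the number of elements, which avoids pulling the whole dataset to the driver. A small sketch, continuing the same session:

```scala
// count is an action: it returns a Long rather than an RDD.
// filteredRdd holds Java, JavaScript, and Java, so the result is 3.
filteredRdd.count
```

Preferring count (or take) over collect is a good habit on large datasets, since collect materializes every element in the driver's memory.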