
  • Spark Cookbook
  • Rishi Yadav

Loading data from the local filesystem

Although the local filesystem is not a good fit for storing big data, due to disk size limits and its lack of a distributed nature, you can technically load data into a distributed system from it. The file or directory you access must then be available at the same path on every node.

Please note that this approach is not a good way to load side data. For side data, Spark provides the broadcast variable feature, which is discussed in upcoming chapters.
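As a preview, a broadcast variable ships a read-only value to every node once, rather than having each node read it from a local file. A minimal sketch in the Spark shell might look like this (the lookup table here is a made-up example, not part of the recipe):

```scala
// Sketch only: sc is the SparkContext that spark-shell provides.
// The lookup map is hypothetical, purely for illustration.
scala> val lookup = sc.broadcast(Map("be" -> "verb", "or" -> "conjunction"))

scala> val tagged = sc.textFile("file:///home/hduser/words")
     |   .flatMap(_.split("\\W+"))
     |   .map(w => (w, lookup.value.getOrElse(w, "unknown")))
```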

In this recipe, we will look at how to load data in Spark from the local filesystem.

How to do it...

Let's start with the example of Shakespeare's "to be or not to be":

  1. Create the words directory by using the following command:
    $ mkdir words
    
  2. Get into the words directory:
    $ cd words
    
  3. Create the sh.txt text file and enter "to be or not to be" in it:
    $ echo "to be or not to be" > sh.txt
    
  4. Start the Spark shell:
    $ spark-shell
    
  5. Load the words directory as an RDD:
    scala> val words = sc.textFile("file:///home/hduser/words")
    
  6. Count the number of lines:
    scala> words.count
    
  7. Divide the line (or lines) into multiple words:
    scala> val wordsFlatMap = words.flatMap(_.split("\\W+"))
    
  8. Convert each word into the pair (word, 1), that is, output 1 as the value for each occurrence of a word as a key:
    scala> val wordsMap = wordsFlatMap.map(w => (w, 1))
    
  9. Use the reduceByKey method to add up the occurrences of each word (the function is applied to two values at a time, represented by a and b):
    scala> val wordCount = wordsMap.reduceByKey((a, b) => a + b)
    
  10. Print the RDD:
    scala> wordCount.collect.foreach(println)
    
  11. All of the preceding operations can be combined into a single step:
    scala> sc.textFile("file:///home/hduser/words").flatMap(_.split("\\W+")).map(w => (w, 1)).reduceByKey((a, b) => a + b).foreach(println)
    

This prints each word with its count, in no particular order:

    (not,1)
    (or,1)
    (be,2)
    (to,2)
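The pipeline above can also be traced without a cluster: ordinary Scala collections provide the same flatMap and map methods, and groupBy followed by a sum stands in for reduceByKey. The following self-contained sketch (the object and method names are made up for illustration) reproduces the word count for the sample line:

```scala
// Local mirror of the RDD pipeline using plain Scala collections,
// so each transformation can be inspected without Spark.
object LocalWordCount {
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\W+"))          // split each line into words
      .filter(_.nonEmpty)                // drop empty tokens from splitting
      .map(w => (w, 1))                  // pair each word with a count of 1
      .groupBy(_._1)                     // group the pairs by word
      .map { case (w, pairs) => (w, pairs.map(_._2).sum) } // sum the 1s

  def main(args: Array[String]): Unit =
    wordCount(Seq("to be or not to be")).foreach(println)
}
```

Running it prints (to,2), (be,2), (or,1), and (not,1), in some order, matching the output of the Spark version.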