
Hadoop streaming

In this recipe, we will look at how we can execute jobs on a Hadoop cluster using scripts written in Bash or Python. It is not mandatory to use only Java to write MapReduce code; any language can be used by invoking the Hadoop streaming utility. Do not confuse this with real-time streaming, which is different from what we will be discussing here.

Getting ready

To step through the recipes in this chapter, make sure you have a running cluster with HDFS and YARN set up correctly, as discussed in the previous chapters. This can be a single-node or a multinode cluster, as long as the cluster is configured correctly.

It is not necessary to know Java to run MapReduce programs on Hadoop. Users can carry forward their existing scripting knowledge and use Bash or Python to run jobs on Hadoop.

How to do it...

  1. Connect to an edge node in the cluster and switch to user hadoop.
  2. The streaming JAR ships with Hadoop and is located at /opt/cluster/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar.
  3. The map script of the wordcount example, written in Python, is outlined in the sketch after this list.
  4. The reduce script is as shown next:
    #!/usr/bin/env python
    
    from operator import itemgetter
    import sys
    
    current_word = None
    current_count = 0
    word = None
    
    # input comes from STDIN
    for line in sys.stdin:
        # remove leading and trailing whitespace
        line = line.strip()
    
        # parse the input we got from mapper.py
        word, count = line.split('\t', 1)
    
        # convert count (currently a string) to int
        try:
            count = int(count)
        except ValueError:
            # count was not a number, so silently
            # ignore/discard this line
            continue
    
        # this IF-switch only works because Hadoop sorts map output
        # by key (here: word) before it is passed to the reducer
        if current_word == word:
            current_count += count
        else:
            if current_word:
                # write result to STDOUT
                print '%s\t%s' % (current_word, current_count)
            current_count = count
            current_word = word
    
    # do not forget to output the last word if needed!
    if current_word == word:
        print '%s\t%s' % (current_word, current_count)
  5. The user can execute the job by submitting the streaming JAR with the hadoop jar command, as sketched after this list.
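
The book shows the map script only as a screenshot, which is not reproduced here. The following is a minimal sketch of a typical streaming wordcount mapper.py, written to emit the tab-separated word and count pairs that reducer.py expects; the exact script used in the book may differ slightly:

    #!/usr/bin/env python
    # Minimal wordcount mapper sketch: reads lines from STDIN and emits
    # "word<TAB>1" for every word, which is the format reducer.py expects.

    import sys

    # input comes from STDIN (standard input)
    for line in sys.stdin:
        # remove leading and trailing whitespace and split into words
        words = line.strip().split()
        for word in words:
            # emit each word with a count of 1, tab-separated
            print '%s\t%s' % (word, 1)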
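
The execution step is likewise shown in the book as a screenshot. A typical invocation looks like the one below; the HDFS paths /input and /output are assumptions for illustration, the scripts are assumed to be executable (chmod +x mapper.py reducer.py) in the current directory, and the output directory must not already exist:

    $ hadoop jar /opt/cluster/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar \
        -files mapper.py,reducer.py \
        -mapper mapper.py \
        -reducer reducer.py \
        -input /input \
        -output /output

Once the job completes, the results can be inspected with hadoop fs -cat /output/part-*.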

How it works...

In this recipe, mapper.py and reducer.py are simple Python scripts that can be executed directly on the command line, without the need for Hadoop, as shown next:

$ cat file | ./mapper.py | ./reducer.py

Here, file is a simple text file. Make sure you understand Python's indentation rules when troubleshooting these scripts.

If users find it difficult to write the scripts or configurations, all of these are available on GitHub: https://github.com/netxillon/hadoop/tree/master/map_scripts
