
Importing data from another Hadoop cluster

Sometimes, we may want to copy data from one HDFS cluster to another, whether for development, testing, or production migration. In this recipe, we will learn how to copy data from one HDFS cluster to another.

Getting ready

To perform this recipe, you should already have a running Hadoop cluster.

How to do it...

Hadoop provides a utility called DistCp, which helps us copy data from one cluster to another. Using this utility is as simple as copying from one folder to another:

hadoop distcp hdfs://hadoopCluster1:9000/source hdfs://hadoopCluster2:9000/target

This runs a MapReduce job to copy the data from one cluster to the other. You can also specify multiple source paths to be copied to the target. There are a couple of other options that we can use:

  • -update: When we use DistCp with the update option, it copies only those files from the source that are missing from the target or differ from the target's copies.
  • -overwrite: When we use DistCp with the overwrite option, it overwrites the target directory with the source.
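The options above can be combined with multiple source paths in a single invocation. The following sketch only builds and prints the command rather than executing it, since it requires a live Hadoop cluster; the hostnames, ports, and paths are the placeholders from the recipe above and should be replaced with your own NameNode addresses:

```shell
# Build a DistCp command that copies two source directories into one target,
# using -update so only missing or changed files are transferred.
# hadoopCluster1/hadoopCluster2 are placeholder NameNode hosts from this recipe.
SRC1="hdfs://hadoopCluster1:9000/source/logs"
SRC2="hdfs://hadoopCluster1:9000/source/archive"
TARGET="hdfs://hadoopCluster2:9000/target"

CMD="hadoop distcp -update $SRC1 $SRC2 $TARGET"

# Print the command; on a real cluster you would run it directly instead.
echo "$CMD"
```

With multiple sources, every listed path is expanded and its contents are copied under the single target directory.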

How it works...

When DistCp is executed, it uses MapReduce to copy the data, and it also handles errors and reporting. It expands the list of source files and directories and hands them to map tasks as input. When copying from multiple sources, collisions at the destination are resolved based on the option (-update or -overwrite) that's provided. By default, a file is skipped if it is already present at the target. Once the copy is complete, the count of skipped files is reported.
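The skip behavior described above makes re-running the same copy a cheap way to resume an interrupted transfer. A minimal sketch, again printing the command rather than running it (the cluster addresses are the placeholders used throughout this recipe):

```shell
# Re-running a copy with -update acts as an incremental sync: files already
# present and unchanged at the target are skipped, and the job counters
# report how many files were skipped versus copied.
SRC="hdfs://hadoopCluster1:9000/source"
DST="hdfs://hadoopCluster2:9000/target"

echo "hadoop distcp -update $SRC $DST"
```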

Note

You can read more on DistCp at https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html.
