
Importing data from another Hadoop cluster

Sometimes, we may want to copy data from one HDFS cluster to another for development, testing, or production migration. In this recipe, we will learn how to do exactly that.

Getting ready

To perform this recipe, you should already have a running Hadoop cluster.

How to do it...

Hadoop provides a utility called DistCp, which helps us copy data from one cluster to another. Using this utility is as simple as copying from one folder to another:

hadoop distcp hdfs://hadoopCluster1:9000/source hdfs://hadoopCluster2:9000/target

This runs a MapReduce job to copy the data from one cluster to the other. You can also specify multiple source paths to be copied to the target. There are a couple of other options that we can use:

  • -update: When we use DistCp with the -update option, it copies only those files from the source that are not present at the target or that differ from the target copies.
  • -overwrite: When we use DistCp with the -overwrite option, it unconditionally overwrites the files in the target directory with the source files.
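
For example, the two options can be applied to the earlier source and target paths like this (the cluster hostnames are the same placeholder NameNode addresses used above; substitute your own):

hadoop distcp -update hdfs://hadoopCluster1:9000/source hdfs://hadoopCluster2:9000/target
hadoop distcp -overwrite hdfs://hadoopCluster1:9000/source hdfs://hadoopCluster2:9000/target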

How it works...

When DistCp is executed, it uses MapReduce to copy the data, which also gives it error handling and reporting. It expands the list of source files and directories and feeds them to map tasks as input. When copying from multiple sources, collisions at the destination are resolved based on the option (-update or -overwrite) that's provided. By default, a file is skipped if it is already present at the target. Once the copy is complete, the count of skipped files is reported.
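
As a sketch of the multiple-source case described above (the source1 and source2 paths are illustrative), all sources can be listed on the command line before the single target:

hadoop distcp hdfs://hadoopCluster1:9000/source1 hdfs://hadoopCluster1:9000/source2 hdfs://hadoopCluster2:9000/target

Alternatively, DistCp's -f option takes a file containing one source URI per line, which is convenient when the source list is long.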

Note

You can read more on DistCp at https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html.
