- Apache Oozie Essentials
- Jagat Jasjit Singh
Book case study
Throughout this book, we will work through a case study that revolves around various concepts of Oozie.
One of the main use cases of Hadoop is ETL data processing.
Suppose we work for a large consulting company and have won a project to set up a Big Data cluster inside the customer's data center. At a high level, the requirements are to set up an environment that satisfies the following flow:
- Get data from various sources into Hadoop (file-based loads and Sqoop-based loads).
- Preprocess the data with various scripts (Pig, Hive, and MapReduce).
- Insert that data into Hive tables for use by analysts and data scientists.
- Data scientists then write machine learning models (Spark).
We will use Oozie as our processing scheduling system to do all the preceding tasks. Since writing the actual Hive, Sqoop, MapReduce, Pig, and Spark code is not in the scope of this book, I will not dive into the business logic of those jobs; I have kept them very simple.
In our architecture, we have one landing server that sits outside the cluster and acts as its front door. All source systems send files to us via scp, and we regularly (nightly, to keep it simple) push them to HDFS using the hadoop fs -copyFromLocal command. This push is driven by a cron-scheduled script with very simple business logic: run every night at 8:00 P.M. and move all the files it sees on the landing server into HDFS.
The work of Oozie starts from this point:
- Oozie picks up each file and cleans it using a Pig script, replacing all the comma (,) delimiters with pipes (|). We will write the same code using both Pig and MapReduce.
- Then, it pushes those processed files into a Hive table.
- For the source systems that are MySQL databases, we run a nightly Sqoop import when the load on the database is light, extracting all the records generated on the previous business day.
- We insert that output into Hive tables as well.
- Analysts and data scientists write their magical Hive scripts and Spark machine learning models on top of those Hive tables.
- We will use Oozie to schedule all of these regular tasks, as sketched below.
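To make these steps concrete, here is a minimal sketch of how they might hang together as a single Oozie workflow. This is not the exact code we will build later in the book: the action names, the script file names (clean_delimiters.pig, load_files.hql, load_sqoop.hql), and the ${...} properties such as jobTracker, nameNode, inputDir, and prevBusinessDay are placeholders of my own. The file-based Pig cleanup and the Sqoop extract run in parallel under a fork/join, and each branch finishes by loading its output into Hive:

```xml
<!-- Sketch only: action names, scripts, and ${...} properties are placeholders,
     not the actual files we will develop in later chapters. -->
<workflow-app name="nightly-etl-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="ingest-fork"/>

    <!-- Run the file-based branch and the database branch in parallel -->
    <fork name="ingest-fork">
        <path start="clean-with-pig"/>
        <path start="sqoop-extract"/>
    </fork>

    <!-- Branch 1: clean the landed files, turning comma delimiters into pipes -->
    <action name="clean-with-pig">
        <pig>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>clean_delimiters.pig</script>
            <param>INPUT=${inputDir}</param>
            <param>OUTPUT=${cleanDir}</param>
        </pig>
        <ok to="load-files-into-hive"/>
        <error to="fail"/>
    </action>

    <action name="load-files-into-hive">
        <hive xmlns="uri:oozie:hive-action:0.5">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>load_files.hql</script>
            <param>CLEAN_DIR=${cleanDir}</param>
        </hive>
        <ok to="ingest-join"/>
        <error to="fail"/>
    </action>

    <!-- Branch 2: nightly Sqoop extract of the previous business day's records -->
    <action name="sqoop-extract">
        <sqoop xmlns="uri:oozie:sqoop-action:0.4">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <arg>import</arg>
            <arg>--connect</arg>
            <arg>jdbc:mysql://${dbHost}/${dbName}</arg>
            <arg>--table</arg>
            <arg>${dbTable}</arg>
            <arg>--where</arg>
            <arg>created_date = '${prevBusinessDay}'</arg>
            <arg>--target-dir</arg>
            <arg>${sqoopOutDir}</arg>
            <arg>-m</arg>
            <arg>1</arg>
        </sqoop>
        <ok to="load-sqoop-into-hive"/>
        <error to="fail"/>
    </action>

    <action name="load-sqoop-into-hive">
        <hive xmlns="uri:oozie:hive-action:0.5">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>load_sqoop.hql</script>
            <param>SQOOP_DIR=${sqoopOutDir}</param>
        </hive>
        <ok to="ingest-join"/>
        <error to="fail"/>
    </action>

    <join name="ingest-join" to="end"/>

    <kill name="fail">
        <message>Nightly ETL failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

A coordinator, again only as a sketch with placeholder dates and paths, can then trigger that workflow once per night after the cron job has pushed the files into HDFS:

```xml
<!-- Sketch: runs the workflow above once a day; dates, timezone, and app-path are placeholders -->
<coordinator-app name="nightly-etl-coord" frequency="${coord:days(1)}"
                 start="2015-01-01T21:00Z" end="2016-01-01T21:00Z" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.4">
    <action>
        <workflow>
            <app-path>${nameNode}/apps/nightly-etl</app-path>
        </workflow>
    </action>
</coordinator-app>
```

In practice, we would submit the coordinator once with oozie job -config job.properties -run and let Oozie take care of rerunning the workflow every night.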