Introduction
In the previous chapter, we discussed the layers of a data-driven system and explained the important storage requirements for each layer. The storage containers in the data layers of AI solutions serve one main purpose: to build and train models that can run in a production environment. In this chapter, we will discuss how to transfer data between the layers in a pipeline so that the data is ready to train a model and, ultimately, to produce an actual forecast (known as the execution, or scoring, of the model).
In an Artificial Intelligence (AI) system, data is continuously updated. Once data enters the system via an upload, an application programming interface (API), or a data stream, it has to be stored securely and typically goes through a few extract, transform, and load (ETL) steps. In systems that handle streaming data, the incoming data has to be directed into a stable and usable data pipeline. Data transformations have to be managed, scheduled, and orchestrated. Furthermore, the lineage of the data has to be stored so that the origins of any data point in a report or application can be traced back. This chapter explains the data preparation (sometimes called pre-processing) mechanisms that ensure raw data can be used for machine learning by data scientists. This is important since raw data is rarely in a form that models can use directly. We will elaborate on the architecture and technology as explained by the layered model in Chapter 1, Data Storage Fundamentals. To start with, let's dive into the details of ETL.
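Before we do, here is a minimal sketch of what a single ETL step can look like in Python with pandas. The file names and the event_time column are hypothetical and stand in for whatever raw data enters your system; real pipelines add error handling, scheduling, and lineage tracking on top of this basic pattern.

import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read the raw data from its source (here, a CSV file).
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop incomplete rows and parse the timestamp column.
    df = df.dropna()
    df["event_time"] = pd.to_datetime(df["event_time"], utc=True)
    return df

def load(df: pd.DataFrame, path: str) -> None:
    # Load: write the prepared data to the next storage layer.
    df.to_csv(path, index=False)

if __name__ == "__main__":
    load(transform(extract("raw_events.csv")), "cleaned_events.csv")

Each of the three functions maps to one stage of the acronym; in the rest of this chapter, we will see how these stages are managed, scheduled, and orchestrated at scale.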