Chapter 3. Input Formats and Schema

The aim of this chapter is to demonstrate how to load data from its raw format into different schemas, thereby enabling a variety of downstream analytics to be run over the same data. When writing analytics, or better still, building libraries of reusable software, you generally have to work with interfaces of fixed input types. Therefore, having the flexibility to transition data between schemas, depending on the purpose, can deliver considerable downstream value, both by widening the types of analysis possible and by enabling the reuse of existing code.
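As a minimal sketch of this idea (the file path, column names, and object name here are hypothetical, not part of GDELT or any later example), the same raw file can be loaded under two different schemas in Spark, serving two different downstream consumers:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object SchemaFlexibilitySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("schema-flexibility")
      .master("local[*]")
      .getOrCreate()

    // Consumer 1: a typed, analyst-friendly schema applied at read time
    val eventSchema = StructType(Seq(
      StructField("eventId", LongType, nullable = false),
      StructField("eventDate", StringType, nullable = true),
      StructField("actorName", StringType, nullable = true)
    ))
    val events = spark.read
      .schema(eventSchema)
      .option("delimiter", "\t")
      .csv("/path/to/events.tsv")

    // Consumer 2: the very same file, read as raw lines for a
    // pipeline that parses the records itself
    val rawLines = spark.read.textFile("/path/to/events.tsv")

    events.printSchema()
    spark.stop()
  }
}
```

Nothing about the file changes between the two reads; only the schema imposed on it does, which is the essence of the schema-on-read approach explored later in this chapter.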

Our primary objective is to learn about the data format features that ship with Spark, although we will also delve into the finer points of data management, introducing proven methods that will enhance your data handling and increase your productivity. After all, you will most likely be required to formalize your work at some point, and knowing how to avoid the long-term pitfalls is invaluable both while writing analytics and long after.

With this in mind, we will use this chapter to look at the traditionally well-understood area of data schemas. We will cover key areas of traditional database modeling and explain how some of these cornerstone principles are still applicable to Spark.

In addition, while honing our Spark skills, we will analyze the GDELT data model and show how to store this large dataset in an efficient and scalable manner.

We will cover the following topics:

  • Dimensional modeling: benefits and weaknesses in relation to Spark
  • Focus on the GDELT model
  • Lifting the lid on schema-on-read
  • Avro object model
  • Parquet storage model

Let's start with some best practice.
