Feature improvement – cleaning datasets

In this topic, we take the results of our understanding of the data and use them to clean the dataset. Much of this book will flow in such a way, using results from previous sections to work on current sections. In feature improvement, our understanding will allow us to begin our first manipulations of datasets. We will be using mathematical transformations to enhance the given data, but not remove any attributes or insert any new ones (that is for the next chapters).

We will explore several topics in this section, including:

  • Structuring unstructured data
  • Data imputing—filling in values where data was missing before
  • Normalization of data:
    • Standardization (known as z-score normalization)
    • Min-max scaling
    • L1 and L2 normalization (projecting into different spaces, fun stuff)

By this point in the book, we will be able to identify whether our data has a structure or not. That is, whether our data is in a nice, tabular format. If it is not, this chapter will give us the tools to transform that data into a more tabular format. This is imperative when attempting to create machine learning pipelines.
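As a small, hypothetical illustration of turning unstructured data into a tabular format, consider a list of raw log lines (the data and column names here are invented for this sketch) parsed into a pandas DataFrame:

```python
import pandas as pd

# Hypothetical unstructured log lines (invented for illustration)
raw = [
    "2024-01-05 ERROR disk full",
    "2024-01-05 INFO backup done",
]

# Split each line into at most three fields: date, level, message
rows = [line.split(" ", 2) for line in raw]

# The result is nice, tabular data ready for a machine learning pipeline
table = pd.DataFrame(rows, columns=["date", "level", "message"])
```

Once the data is in a DataFrame like this, every downstream tool in the pipeline can operate on named columns instead of raw strings.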

Data imputing is a particularly interesting topic. The ability to fill in data where data was missing previously is trickier than it sounds. We will be proposing all kinds of solutions, from the very, very easy (merely removing the column altogether: boom, no more missing data) to the interestingly complex (using machine learning on the rest of the features to fill in the missing spots). Once we have filled in the bulk of our missing data, we can then measure how that affected our machine learning algorithms.
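The two ends of that spectrum can be sketched in a few lines of pandas. The dataset below is hypothetical, invented only to show the mechanics of dropping versus filling:

```python
import numpy as np
import pandas as pd

# Hypothetical data with one missing value (NaN) in the "age" column
df = pd.DataFrame({
    "age": [25.0, np.nan, 31.0, 48.0],
    "income": [50.0, 62.0, 58.0, 81.0],
})

# The very easy option: drop every column that contains missing data
dropped = df.dropna(axis=1)  # only "income" survives

# A slightly smarter option: fill each missing spot with its column's mean
filled = df.fillna(df.mean())
```

Dropping is simple but throws away an entire feature for one missing cell; mean-filling keeps the feature at the cost of inventing a value, which is exactly the trade-off we will measure against machine learning performance.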

Normalization uses (generally simple) mathematical tools to change the scaling of our data. Again, this ranges from the easy, turning miles into feet or pounds into kilograms, to the more difficult, such as projecting our data onto the unit sphere (more on that to come).
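The three normalization schemes listed earlier can each be written as a one-line NumPy expression. The feature matrix here is a toy example invented for this sketch, with rows as samples and columns as features:

```python
import numpy as np

# Toy feature matrix: two features on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Standardization (z-score): subtract each column's mean, divide by its std
z = (X - X.mean(axis=0)) / X.std(axis=0)

# Min-max scaling: squeeze each column into the [0, 1] range
mm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# L2 normalization: rescale each row so it lies on the unit sphere
l2 = X / np.linalg.norm(X, axis=1, keepdims=True)
```

After z-scoring, every column has mean 0 and standard deviation 1; after min-max scaling, every column runs exactly from 0 to 1; after L2 normalization, every row has length 1, which is the "projecting onto the unit sphere" idea mentioned above.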

This chapter and the remaining chapters will be much more heavily focused on our quantitative feature engineering procedure evaluation flow. Nearly every single time we look at a new dataset or feature engineering procedure, we will put it to the test. We will be grading the performance of various feature engineering methods on the merits of machine learning performance, speed, and other metrics. This text should be used as a reference, not as a definitive guide to which feature engineering procedures you may skip based on their difficulty or their effect on performance. Every new data task comes with its own caveats and may require different procedures than the previous data task.
