- Feature Engineering Made Easy
- Sinan Ozdemir Divya Susarla
Feature transformation – enter math-man
This chapter is where things get mathematical and interesting. We have already talked about understanding features and cleaning them, and we have looked at how to remove features and add new ones. In our feature construction chapter, we had to create these new features manually. We, as humans, had to use our brains and come up with those three ways of decomposing that image of a stop sign. Sure, we can write code that builds the features automatically, but we ultimately chose which features we wanted to use.
This chapter will start to look at the automatic creation of these features as it applies to mathematical dimensionality. If we regard our data as vectors in an n-space (n being the number of columns), we will ask ourselves, can we create a new dataset in a k-space (where k < n) that fully or nearly represents the original data, but might give us speed boosts or performance enhancements in machine learning? The goal here is to create a dataset of smaller dimensionality that performs better than our original dataset at a larger dimensionality.
The first question here is, weren't we creating data in smaller dimensionality before when we were feature selecting? If we start with 17 features and remove five, we've reduced the dimensionality to 12, right? Yes, of course! However, we aren't talking simply about removing columns here, we are talking about using complex mathematical transformations (usually taken from our studies in linear algebra) and applying them to our datasets.
One notable example we will spend some time on is called Principal Component Analysis (PCA). It is a transformation that decomposes our data into three separate matrices, and we can use these results to create brand new datasets that can outperform our original!
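As a taste of what is to come, here is a minimal sketch of PCA using scikit-learn's `PCA` class on the built-in iris dataset (both the dataset and the two-component choice are illustrative assumptions, not from the original text):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Load a small dataset: 150 rows living in a 4-dimensional space (n = 4)
X, y = load_iris(return_X_y=True)

# Project the data onto its top 2 principal components (k = 2 < n)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X.shape)          # (150, 4)
print(X_reduced.shape)  # (150, 2)

# The reduced dataset still retains most of the original variance
print(pca.explained_variance_ratio_.sum())
```

Even after halving the number of columns, the projected dataset retains well over 90% of the variance in the original data, which is exactly the kind of "nearly represents the original" behavior we are after.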
Here is a visual example taken from a Princeton University research experiment that used PCA to exploit patterns in gene expressions. This is a great application of dimensionality reduction: there are so many genes and combinations of genes that it would take even the most sophisticated algorithms in the world plenty of time to process them:
In the preceding screenshot, A represents the original dataset, and U, W, and VT represent the results of a singular value decomposition. The results are then put together to make a brand new dataset that can replace A to a certain extent.
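The decomposition in the screenshot can be sketched with NumPy's `svd` routine. The small matrix below is a made-up stand-in for the dataset A; the point is only that the three pieces U, W, and VT multiply back to (approximately) reproduce A:

```python
import numpy as np

# A small matrix standing in for the original dataset A
A = np.array([[4.0, 0.0],
              [3.0, -5.0]])

# Singular value decomposition: A = U @ W @ VT
U, w, Vt = np.linalg.svd(A)
W = np.diag(w)  # svd returns the singular values as a 1-D array

# Multiplying the three pieces back together recovers A
A_rebuilt = U @ W @ Vt
print(np.allclose(A, A_rebuilt))  # True

# Keeping only the largest singular value gives a lower-rank
# approximation of A -- the essence of reducing n columns to k
A_rank1 = w[0] * np.outer(U[:, 0], Vt[0, :])
```

Dropping the smaller singular values is what lets the "brand new dataset" replace A to a certain extent: the truncated product is the best low-rank approximation of A in the least-squares sense.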