
Feature transformation – enter math-man

This chapter is where things get mathematical and interesting. We've talked about understanding features and cleaning them. We've also looked at how to remove features and add new ones. In our feature construction chapter, we had to create those new features manually. We, the humans, had to use our brains and come up with those three ways of decomposing that image of a stop sign. Sure, we can write code that generates the features automatically, but we ultimately chose which features we wanted to use.

This chapter will start to look at the automatic creation of these features as it applies to mathematical dimensionality. If we regard our data as vectors in an n-space (n being the number of columns), we can ask ourselves: can we create a new dataset in a k-space (where k < n) that fully or nearly represents the original data, but that might give us speed boosts or performance enhancements in machine learning? The goal here is to create a dataset of smaller dimensionality that performs better than our original dataset of larger dimensionality.
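To make this concrete, here is a minimal NumPy sketch of the mechanics, assuming made-up data; the projection matrix here is random purely to show the shapes involved, whereas the transformations in this chapter will choose it in a principled way:

import numpy as np

# Made-up dataset: 100 rows living in an n-space (n = 5 columns)
X = np.random.rand(100, 5)

# A linear map that sends each 5-dimensional row to a
# 2-dimensional one (k = 2, with k < n)
W = np.random.rand(5, 2)
X_reduced = X @ W

print(X.shape)          # (100, 5)
print(X_reduced.shape)  # (100, 2)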

The first question here is: weren't we already creating data of smaller dimensionality when we were selecting features? If we start with 17 features and remove five, we've reduced the dimensionality to 12, right? Yes, of course! However, we aren't talking simply about removing columns here; we are talking about applying complex mathematical transformations (usually taken from our studies in linear algebra) to our datasets.

One notable example we will spend some time on is called Principal Component Analysis (PCA). It is a transformation that breaks down our data into three different datasets, and we can use these results to create brand-new datasets that can outperform our original!
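As a quick preview, here is a minimal sketch using scikit-learn's PCA on made-up data (the dataset and the choice of two components are assumptions purely for illustration):

import numpy as np
from sklearn.decomposition import PCA

# Made-up dataset: 100 samples with 5 features
X = np.random.rand(100, 5)

# Keep only the two components that capture the most variance
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

print(X_pca.shape)                    # (100, 2)
print(pca.explained_variance_ratio_)  # variance captured by each component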

Here is a visual example, taken from a Princeton University research experiment that used PCA to exploit patterns in gene expressions. This is a great application of dimensionality reduction: there are so many genes, and combinations of genes, that it would take even the most sophisticated algorithms in the world plenty of time to process them all:

In the preceding screenshot, A represents the original dataset, while U, W, and VT represent the results of a singular value decomposition. The results are then put together to make a brand-new dataset that can replace A to a certain extent.
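Here is a minimal NumPy sketch of that decomposition and reconstruction on a made-up matrix; the variable names mirror the screenshot, and the choice of k is an assumption for illustration:

import numpy as np

# Made-up matrix A standing in for the original dataset
A = np.random.rand(6, 4)

# Singular value decomposition: A = U * W * VT,
# where W holds the singular values on its diagonal
U, w, VT = np.linalg.svd(A, full_matrices=False)
W = np.diag(w)

# Putting the three results back together recovers A exactly
A_rebuilt = U @ W @ VT
print(np.allclose(A, A_rebuilt))  # True

# Keeping only the top k singular values yields a smaller
# representation that approximates A
k = 2
A_approx = U[:, :k] @ W[:k, :k] @ VT[:k, :]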
