Feature transformation – enter math-man

This chapter is where things get mathematical and interesting. So far, we have talked about understanding features and cleaning them. We have also looked at how to remove features and how to add new ones. In our feature construction chapter, we had to create these new features manually. We, the humans, had to use our brains to come up with those three ways of decomposing that image of a stop sign. Sure, we can write code that creates the features automatically, but ultimately we chose which features we wanted to use.

This chapter will start to look at the automatic creation of these features as it applies to mathematical dimensionality. If we regard our data as vectors in an n-space (n being the number of columns), we will ask ourselves: can we create a new dataset in a k-space (where k < n) that fully or nearly represents the original data, but might give us speed boosts or performance enhancements in machine learning? The goal here is to create a dataset of smaller dimensionality that performs better than our original dataset at a larger dimensionality.
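As a quick preview of that goal, here is a minimal sketch using scikit-learn's PCA (which this chapter goes on to discuss); the dataset is randomly generated and the column counts are arbitrary, chosen only to show the shape change from an n-space to a k-space:

import numpy as np
from sklearn.decomposition import PCA

# 500 rows of made-up data living in an n-space with n = 10 columns
X = np.random.rand(500, 10)

# Ask for a k-space representation with k = 3 (so k < n)
pca = PCA(n_components=3)
X_small = pca.fit_transform(X)

print(X.shape)        # (500, 10) -- the original n-space
print(X_small.shape)  # (500, 3)  -- the smaller k-space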

The first question here is, weren't we creating data in smaller dimensionality before when we were feature selecting? If we start with 17 features and remove five, we've reduced the dimensionality to 12, right? Yes, of course! However, we aren't talking simply about removing columns here, we are talking about using complex mathematical transformations (usually taken from our studies in linear algebra) and applying them to our datasets.
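To make that distinction concrete, here is a minimal sketch contrasting the two ideas; the DataFrame, its column names, and the projection matrix are all made up for illustration:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(5, 3), columns=['a', 'b', 'c'])

# Feature selection: keep a subset of columns; the surviving
# values are completely untouched
selected = df[['a', 'b']]

# Feature transformation: project onto new axes; every output
# column mixes information from ALL of the original columns
projection = np.array([[0.5,  0.5],
                       [0.5, -0.5],
                       [0.7,  0.1]])   # a made-up 3x2 linear map
transformed = df.values @ projection    # 5 rows, 2 brand new columns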

One notable example we will spend some time on is called Principal Components Analysis (PCA). It is a transformation that breaks down our data into three different datasets, and we can use these results to create brand new datasets that can outperform our original!

Here is a visual example, taken from a Princeton University research experiment that used PCA to exploit patterns in gene expressions. This is a great application of dimensionality reduction: with so many genes and combinations of genes, even the most sophisticated algorithms in the world would take plenty of time to process them:

In the preceding screenshot, A represents the original dataset, while U, W, and VT represent the results of a singular value decomposition. The results are then put together to make a brand new dataset that can replace A to a certain extent.
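To make that notation concrete, here is a minimal numpy sketch of the same decomposition, with a small random matrix standing in for the gene-expression data (the shapes and the choice of k are made up for illustration):

import numpy as np

# A stands in for the original dataset (rows = samples, columns = genes)
A = np.random.rand(6, 4)

# Singular value decomposition: A = U * W * VT
U, w, VT = np.linalg.svd(A, full_matrices=False)
W = np.diag(w)

# Putting the three pieces back together recovers A (up to float error)
print(np.allclose(A, U @ W @ VT))  # True

# Keeping only the top k singular values yields a brand new dataset
# that approximately replaces A at a smaller rank
k = 2
A_k = U[:, :k] @ W[:k, :k] @ VT[:k, :]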
