Avoid overfitting with feature selection and dimensionality reduction

We typically represent data as a grid of numbers, that is, a matrix. Each column represents a variable, which we call a feature in machine learning. In supervised learning, one of the variables is actually not a feature but the label that we are trying to predict, and each row is an example that we can use for training or testing. The number of features corresponds to the dimensionality of the data, and the machine learning approach we choose depends on the number of dimensions relative to the number of examples. For instance, text and image data are very high dimensional, while stock market data has relatively few dimensions. Fitting high-dimensional data is computationally expensive and is also prone to overfitting due to the high model complexity. Data with more than a few dimensions is also impossible to visualize directly, which rules out simple diagnostic methods.
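To make this concrete, here is a minimal sketch using scikit-learn's built-in handwritten digits dataset (the dataset is our illustrative choice, not one discussed in this section): the data matrix has one row per example and one column per feature.

    # A minimal sketch, assuming scikit-learn is installed; the digits
    # dataset is chosen purely for illustration.
    from sklearn.datasets import load_digits

    X, y = load_digits(return_X_y=True)
    print(X.shape)  # (1797, 64): 1797 examples, each with 64 features (8x8 pixels)
    print(y.shape)  # (1797,): one label per example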

Not all features are useful; some may only add randomness to our results. Good feature selection is therefore often important. Feature selection is the process of picking a subset of significant features for use in building a better model. In practice, not every feature in a dataset carries information useful for discriminating samples; some features are redundant or irrelevant and can hence be discarded with little loss.
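As an illustration, one simple way to pick such a subset is a univariate filter. The following sketch (our own example, with an arbitrary choice of dataset and of k) uses scikit-learn's SelectKBest, which scores each feature independently against the label and keeps the top k.

    # A sketch assuming scikit-learn; SelectKBest ranks features by their
    # ANOVA F-score against the label and keeps the k highest-scoring ones.
    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = load_iris(return_X_y=True)
    selector = SelectKBest(f_classif, k=2)  # keep the 2 best features
    X_selected = selector.fit_transform(X, y)
    print(X.shape, '->', X_selected.shape)  # (150, 4) -> (150, 2)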

In principle, feature selection boils down to multiple binary decisions: whether or not to include each feature. For n features, we get 2^n possible feature sets, which can be a very large number even for a modest number of features. For example, for 10 features, we have 1,024 possible feature sets (for instance, if we are deciding what clothes to wear, the features can be temperature, rain, the weather forecast, where we are going, and so on). At a certain point, brute-force evaluation becomes infeasible. We will discuss better methods in Chapter 6, Click-Through Prediction with Logistic Regression. Basically, we have two options: we either start with all the features and remove features iteratively, or we start with a minimal set of features and add features iteratively. We then take the best feature set from each iteration and compare these sets against each other.
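The first option, starting with all the features and removing them iteratively, is implemented for example by scikit-learn's recursive feature elimination. The sketch below is our own illustration, with an arbitrary choice of estimator and target feature count.

    # A sketch of backward elimination, assuming scikit-learn; RFE drops the
    # weakest feature (judged by the model's coefficients) at each iteration.
    from sklearn.datasets import load_iris
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2)
    rfe.fit(X, y)
    print(rfe.support_)  # Boolean mask marking the 2 surviving features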

Another common approach to dimensionality reduction is to transform high-dimensional data into a lower-dimensional space. This transformation inevitably loses some information, but we can keep the loss to a minimum. We will cover this in more detail later on.
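Principal component analysis (PCA) is the classic example of such a transformation. Here is a minimal sketch (our own, with an arbitrary number of target components) showing how much of the data's variance survives the projection.

    # A sketch assuming scikit-learn; PCA projects the data onto the
    # directions of greatest variance, minimizing the information lost.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    X, _ = load_digits(return_X_y=True)
    pca = PCA(n_components=10)        # 64 dimensions -> 10
    X_reduced = pca.fit_transform(X)
    print(X.shape, '->', X_reduced.shape)        # (1797, 64) -> (1797, 10)
    print(pca.explained_variance_ratio_.sum())   # fraction of variance retained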
