- Python Machine Learning By Example
- Yuxi (Hayden) Liu
Avoid overfitting with feature selection and dimensionality reduction
We typically represent the data as a grid of numbers (a matrix). Each column represents a variable, which we call a feature in machine learning. In supervised learning, one of the variables is actually not a feature but the label that we are trying to predict, and each row is an example that we can use for training or testing. The number of features corresponds to the dimensionality of the data. Our machine learning approach depends on the number of dimensions versus the number of examples. For instance, text and image data are very high dimensional, while stock market data has relatively fewer dimensions. Fitting high-dimensional data is computationally expensive and is also prone to overfitting due to high complexity. Higher dimensions are also impossible to visualize, and therefore, we can't use simple diagnostic methods.
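The following is a minimal sketch of this matrix view, assuming scikit-learn's built-in breast cancer dataset as a stand-in for any supervised dataset:

```python
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

# Each row of X is an example, each column a feature; y holds the labels
print(X.shape)   # (569, 30) -> 569 examples, 30 dimensions (features)
print(y.shape)   # (569,)    -> one label per example
```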
Not all the features are useful, and some may only add randomness to our results. It is, therefore, often important to do good feature selection. Feature selection is the process of picking a subset of significant features for use in better model construction. In practice, not every feature in a dataset carries information useful for discriminating samples; some features are either redundant or irrelevant and hence can be discarded with little loss.
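As a quick illustration, here is a sketch of simple filter-style feature selection, assuming scikit-learn is available; SelectKBest keeps the k features that score highest against the label:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_breast_cancer(return_X_y=True)

selector = SelectKBest(score_func=f_classif, k=10)   # keep the 10 best-scoring features
X_selected = selector.fit_transform(X, y)

print(X.shape, '->', X_selected.shape)               # (569, 30) -> (569, 10)
```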
In principle, feature selection boils down to multiple binary decisions: whether to include a feature or not. For n features, we get 2^n possible feature sets, which can be a very large number for a large number of features. For example, for 10 features, we have 1,024 possible feature sets (for instance, if we are deciding what clothes to wear, the features can be temperature, rain, the weather forecast, where we are going, and so on). At a certain point, brute-force evaluation becomes infeasible. We will discuss better methods in Chapter 6, Click-Through Prediction with Logistic Regression, in this book. Basically, we have two options: we either start with all the features and remove features iteratively, or we start with a minimal set of features and add features iteratively. We then take the best feature set from each iteration and compare them.
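Below is a minimal sketch of these two iterative strategies, assuming scikit-learn >= 0.24 and a logistic regression model as the evaluator:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=10000)

# Forward selection: start from no features and add one at a time
forward = SequentialFeatureSelector(model, n_features_to_select=5,
                                    direction='forward').fit(X, y)

# Backward elimination: start from all features and remove one at a time
backward = SequentialFeatureSelector(model, n_features_to_select=5,
                                     direction='backward').fit(X, y)

print(forward.get_support(indices=True))    # indices of features kept by forward selection
print(backward.get_support(indices=True))   # indices of features kept by backward elimination
```

Note that the two directions can end up with different feature sets, which is why we compare the best candidates from each strategy.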
Another common approach to reducing dimensionality is to transform high-dimensional data into a lower-dimensional space. This transformation leads to information loss, but we can keep the loss to a minimum. We will cover this in more detail later on.
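As a brief preview, here is a sketch of dimensionality reduction by transformation, assuming scikit-learn's PCA, which projects the data onto the directions of largest variance:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_breast_cancer(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)   # PCA is sensitive to feature scale

pca = PCA(n_components=2)                      # keep only 2 transformed dimensions
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                         # (569, 2)
print(pca.explained_variance_ratio_.sum())     # fraction of variance retained
```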