- Mastering Machine Learning with R (Second Edition)
- Cory Lesmeister
Regularization in a nutshell
You may recall that our linear model follows the form Y = B0 + B1x1 + ... + Bnxn + e, and also that the best fit tries to minimize the RSS, which is the sum of the squared errors of the actual values minus the estimates, or e1² + e2² + ... + en².
With regularization, we will apply what is known as a shrinkage penalty in conjunction with minimizing the RSS. This penalty consists of a lambda (symbol λ) along with a norm of the beta coefficients, or weights. How these weights are normalized differs between the techniques, and we will discuss each accordingly. Quite simply, in our model we are minimizing RSS + λ(normalized coefficients). We will select λ, which is known as the tuning parameter, during the model-building process. Please note that if lambda is equal to 0, our model is equivalent to OLS, as the penalty term is cancelled out.
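Although this book works in R, the objective above can be sketched in a few lines of NumPy for the ridge case, where the penalty is λ times the sum of squared coefficients. This is an illustrative sketch, not the book's code; the closed-form solution (X'X + λI)⁻¹X'y assumes centered data and an unpenalized intercept:

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Minimize RSS + lam * sum(beta_j^2) in closed form.
    Predictors and response are centered so the intercept is not penalized."""
    Xc = X - X.mean(axis=0)               # center predictors
    yc = y - y.mean()                     # center response
    p = X.shape[1]
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ yc)
    intercept = y.mean() - X.mean(axis=0) @ beta
    return intercept, beta

# Hypothetical data for illustration only
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

b0_ols, beta_ols = ridge_coefficients(X, y, lam=0.0)    # lam = 0 recovers OLS
b0_r, beta_r = ridge_coefficients(X, y, lam=10.0)       # lam > 0 shrinks the betas
```

With lam=0 the penalty vanishes and the solution matches ordinary least squares exactly, which mirrors the point above; any positive lambda pulls the coefficient vector toward zero.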
So what does this do for us, and why does it work? First of all, regularization methods are very computationally efficient. With best subsets, we are searching 2^p models, and in large datasets this may not be feasible to attempt. In R, we fit only one model for each value of lambda, which is far more efficient. Another reason goes back to the bias-variance trade-off discussed in the preface. In a linear model, where the relationship between the response and the predictors is close to linear, the least squares estimates will have low bias but may have high variance. This means that a small change in the training data can cause a large change in the least squares coefficient estimates (James, 2013). Regularization, through the proper selection of lambda and the normalization method, may help you improve the model fit by optimizing the bias-variance trade-off. Finally, regularization of the coefficients works to solve multicollinearity problems.
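The variance and multicollinearity points can be seen directly in a small simulation. Below is a hedged NumPy sketch (hypothetical data, not from the book): two nearly collinear predictors make the OLS coefficients swing wildly from one training sample to the next, while a modest ridge penalty stabilizes them:

```python
import numpy as np

rng = np.random.default_rng(42)
n, lam = 60, 5.0
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2])

def fit(X, y, lam):
    """Ridge solution; lam = 0 gives OLS. Intercept omitted for simplicity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

ols_betas, ridge_betas = [], []
for _ in range(200):                        # resample noise to mimic new training sets
    y = x1 + x2 + rng.normal(scale=1.0, size=n)
    ols_betas.append(fit(X, y, 0.0))
    ridge_betas.append(fit(X, y, lam))

ols_sd = np.std(ols_betas, axis=0)          # large: unstable OLS estimates
ridge_sd = np.std(ridge_betas, axis=0)      # much smaller: shrinkage stabilizes
```

Across the 200 simulated training sets, the ridge coefficients vary far less than the OLS ones: a little bias has been traded for a large reduction in variance, which is exactly the trade-off described above.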