LASSO

LASSO applies the L1-norm instead of the L2-norm used in ridge regression. The L1 penalty is the sum of the absolute values of the feature weights, so LASSO minimizes RSS + λ(sum |βj|). This shrinkage penalty can indeed force a feature weight all the way to zero. This is a clear advantage over ridge regression, as it can greatly improve model interpretability.
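To make this concrete, here is a minimal sketch using scikit-learn's Lasso and Ridge estimators on synthetic data (the library, data, and penalty value are assumptions for illustration, not an example from this book). With an L1 penalty, several coefficients are driven exactly to zero, whereas the L2 penalty only shrinks them toward zero.

```python
# Minimal sketch: L1 (LASSO) versus L2 (ridge) shrinkage on synthetic data.
# The estimator choice (scikit-learn) and the alpha value are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# 10 features, only 3 of which actually drive the response
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=42)

lasso = Lasso(alpha=1.0).fit(X, y)   # alpha plays the role of lambda
ridge = Ridge(alpha=1.0).fit(X, y)

# LASSO zeroes out uninformative coefficients; ridge merely shrinks them
print("LASSO coefficients exactly zero:", int(np.sum(lasso.coef_ == 0)))
print("Ridge coefficients exactly zero:", int(np.sum(ridge.coef_ == 0)))
```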

The mathematics behind why the L1-norm allows the weights/coefficients to become zero is beyond the scope of this book (refer to Tibshirani, 1996 for further details).

If LASSO is so great, then ridge regression must surely be obsolete. Not so fast! In a situation of high collinearity or high pairwise correlations, LASSO may force a predictive feature's coefficient to zero, and you can thus lose predictive ability; that is, if both feature A and feature B belong in your model, LASSO may shrink one of their coefficients to zero. The following quote sums up this issue nicely, and a small sketch of the behavior follows it:

"One might expect the lasso to perform better in a setting where a relatively small number of predictors have substantial coefficients, and the remaining predictors have coefficients that are very small or that equal zero. Ridge regression will perform better when the response is a function of many predictors, all with coefficients of roughly equal size."
                                                                                                                     -(James, 2013)
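The following sketch illustrates the collinearity caveat (the library and the synthetic data are assumptions for illustration): two nearly identical features both drive the response, yet the L1 penalty tends to concentrate the weight on one of them, while ridge splits the weight roughly evenly.

```python
# Sketch of the collinearity caveat: features A and B are near-duplicates and
# both genuinely contribute to y. Library, data, and alpha are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n = 500
signal = rng.normal(size=n)
A = signal + rng.normal(scale=0.01, size=n)   # feature A
B = signal + rng.normal(scale=0.01, size=n)   # feature B, highly correlated with A
X = np.column_stack([A, B])
y = 3 * A + 3 * B + rng.normal(scale=0.5, size=n)

# LASSO tends to push one of the two correlated coefficients to (or near) zero,
# while ridge shares the weight roughly equally between them.
print("LASSO coefficients:", Lasso(alpha=0.5).fit(X, y).coef_)
print("Ridge coefficients:", Ridge(alpha=0.5).fit(X, y).coef_)
```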

There is the possibility of achieving the best of both worlds, and that leads us to the next topic, elastic net.
