- Deep Learning with R for Beginners
- Mark Hodnett Joshua F. Wiley Yuxi (Hayden) Liu Pablo Maldonado
L2 penalty in action
To see how the L2 penalty works, we can use the same simulated linear regression problem we used for the L1 penalty. To fit a ridge regression model, we use the glmnet() function from the glmnet package. As mentioned previously, this function can fit either the L1 or the L2 penalty, and which one is used is controlled by the alpha argument: when alpha = 1, it fits the lasso, and when alpha = 0, it fits ridge regression. This time, we choose alpha = 0. Again, we evaluate a range of lambda values and tune this hyperparameter automatically using cross-validation, which is accomplished with the cv.glmnet() function. We plot the ridge regression object to see the error across a variety of lambda values:
m.ridge.cv <- cv.glmnet(X[1:100, ], y[1:100], alpha = 0)
plot(m.ridge.cv)

Although the shape is different from the lasso, in that the error appears to level off (asymptote) at higher lambda values, it is still clear that when the penalty gets too high, the cross-validated model error increases. As with the lasso, the ridge regression model seems to do well at very low lambda values, perhaps indicating that the L2 penalty does not improve out-of-sample performance (generalizability) by much here.
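For readers following along without the earlier section, the full pipeline can be sketched as below. The data-generating step here is an assumption, reconstructed from the true coefficients quoted later in the text (intercept 3; slopes 1, 1, 1, 1, 0); the original simulation may differ, for example in how correlated the predictors are.

```r
## Sketch only: the simulated data are an assumption modelled on the
## true coefficients quoted in the text (intercept 3; slopes 1, 1, 1, 1, 0).
library(glmnet)

set.seed(1234)
X <- matrix(rnorm(200 * 5), ncol = 5)
y <- as.vector(3 + X %*% c(1, 1, 1, 1, 0) + rnorm(200))

## Cross-validated ridge regression (alpha = 0 selects the L2 penalty);
## the first 100 rows are used for fitting, as in the text
m.ridge.cv <- cv.glmnet(X[1:100, ], y[1:100], alpha = 0)

## lambda.min is the penalty with the lowest cross-validated error;
## lambda.1se is the largest penalty within one standard error of it
m.ridge.cv$lambda.min
m.ridge.cv$lambda.1se

## Coefficients at the error-minimising penalty
coef(m.ridge.cv, s = "lambda.min")
```

Plotting the returned object, as in the text, shows the cross-validated error for every lambda tried, with vertical lines at lambda.min and lambda.1se.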
Finally, we can compare the OLS coefficients with those from lasso and the ridge regression model:
> cbind(OLS = coef(m.ols),
        Lasso = coef(m.lasso.cv)[, 1],
        Ridge = coef(m.ridge.cv)[, 1])
               OLS Lasso   Ridge
(Intercept)  2.958  2.99  2.9919
X[1:100, ]1 -0.082  1.41  0.9488
X[1:100, ]2  2.239  0.71  0.9524
X[1:100, ]3  0.602  0.51  0.9323
X[1:100, ]4  1.235  1.17  0.9548
X[1:100, ]5 -0.041  0.00 -0.0023
Although ridge regression does not shrink the coefficient for the fifth predictor to exactly 0, it is smaller than in the OLS model, and the remaining parameters are all slightly shrunken yet quite close to their true values of 3, 1, 1, 1, 1, and 0.
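This shrink-everything-but-zero-nothing behaviour can be seen directly from the closed-form ridge estimator, (X'X + lambda*I)^-1 X'y. The sketch below is purely illustrative and is not how glmnet computes its solution (glmnet standardizes the predictors and leaves the intercept unpenalized, so its numbers will not match this exactly); the simulated data here are again an assumption.

```r
## Illustrative sketch of the closed-form ridge estimator
## beta_hat = (X'X + lambda * I)^-1 X'y; the intercept is omitted
## for simplicity, so this penalises all columns of X equally.
set.seed(42)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), ncol = p)
y <- X %*% c(1, 1, 1, 1, 0) + rnorm(n)

ridge_beta <- function(X, y, lambda) {
  drop(solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, y)))
}

## As lambda grows, the coefficient vector is pulled toward zero
## (its L2 norm shrinks), but no entry is set exactly to zero,
## unlike the lasso
round(cbind(lambda0   = ridge_beta(X, y, 0),
            lambda1   = ridge_beta(X, y, 1),
            lambda100 = ridge_beta(X, y, 100)), 3)
```

With lambda = 0 this reduces to OLS; increasing lambda trades a little bias for lower variance, which is why the cross-validated error curve above first stays flat and then rises once the penalty dominates the fit.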