- Machine Learning Quick Reference
- Rahul Kumar
- 187 words
- 2021-08-20 10:05:06
Ridge regression (L2)
In ridge regression, we make a change to the loss function: a shrinkage component is added to the original least-squares loss:

$$\text{RSS} + \lambda\sum_{j=1}^{p}\beta_j^2 \;=\; \sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2 + \lambda\sum_{j=1}^{p}\beta_j^2$$

This modified loss function is then minimized to obtain the coefficient estimates. Here, λ is the tuning parameter that controls the strength of regularization: it decides how much to penalize the flexibility of the model. That flexibility depends on the size of the coefficients: large coefficients let the model fit the training data very closely, which increases variance and is not a good sign for generalization, while smaller coefficients restrict that flexibility and the model tends to perform better. This shrinkage of each estimated parameter is what ridge regression provides. As λ grows, that is, λ → ∞, the penalty component dominates and the estimates shrink toward zero. Conversely, as λ → 0, the penalty component vanishes and the estimates approach those of ordinary least squares (OLS), the standard method for estimating the unknown parameters of a linear regression.
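A minimal NumPy sketch of this behavior, using the closed-form ridge solution β̂ = (XᵀX + λI)⁻¹Xᵀy on synthetic data (the data, dimensions, and λ grid below are illustrative assumptions, not from the source):

```python
import numpy as np

# Synthetic regression data, made up purely for illustration.
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
true_beta = np.array([3.0, -2.0, 1.5, 0.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge estimates: solve (X^T X + lam*I) beta = X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# As lambda grows, the coefficient norm shrinks toward zero;
# at lambda = 0 the solution coincides with OLS.
for lam in [0.0, 1.0, 10.0, 100.0, 1000.0]:
    beta_hat = ridge_fit(X, y, lam)
    print(f"lambda={lam:7.1f}  ||beta_hat|| = {np.linalg.norm(beta_hat):.3f}")
```

Note that this sketch penalizes all coefficients uniformly and omits an intercept; in practice one would center the data (or exclude β₀ from the penalty) and standardize the features before shrinking.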