- Advanced Machine Learning with R
- Cory Lesmeister Dr. Sunil Kumar Chinnamgari
Elastic net
The power of elastic net is that it performs feature selection, unlike ridge regression, and it will also group correlated features, which LASSO fails to do. Again, LASSO tends to select one feature from a group of correlated ones and ignore the rest. Elastic net accomplishes this by including a mixing parameter, alpha, in conjunction with lambda. Alpha will be between 0 and 1, and as before, lambda will regulate the size of the penalty. Please note that an alpha of 0 is equivalent to ridge regression and an alpha of 1 is equivalent to LASSO. Essentially, we're blending the L1 and L2 penalties by including a second tuning parameter with a quadratic (squared) term of the beta coefficients. We'll end up with the goal of minimizing RSS/(2N) + λ[(1 − α)(ΣBj²)/2 + α(Σ|Bj|)].
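To make the role of alpha concrete, here is a minimal sketch using the glmnet package, which implements exactly this blended penalty. The simulated data below is illustrative only (it is not the dataset introduced in the next section); the variable names are my own.

```r
library(glmnet)

set.seed(42)
# Simulated data: 100 observations, 10 features, only the first 3 predictive
x <- matrix(rnorm(100 * 10), nrow = 100)
y <- x[, 1] + 0.5 * x[, 2] - 0.5 * x[, 3] + rnorm(100)

# alpha = 0 gives ridge, alpha = 1 gives LASSO; 0.5 blends the two penalties
fit <- glmnet(x, y, alpha = 0.5)

# Cross-validation selects lambda for the chosen alpha
cv_fit <- cv.glmnet(x, y, alpha = 0.5)

# Coefficients at the lambda that minimizes cross-validated error;
# noise features should be shrunk to (or near) zero
coef(cv_fit, s = "lambda.min")
```

In practice, you would also tune alpha itself, for example by looping over a grid of alpha values and comparing cross-validated error at each.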
Let's put these techniques to the test. We'll utilize a dataset I created to demonstrate the methods. In the next section, I'll discuss how I created the dataset with a few predictive features and some noise features, including some with high correlation. I recommend that, once you feel comfortable with this chapter's content, you go back and apply these techniques to the data examined in the prior two chapters and compare performance.