Deep Learning with R for Beginners
Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado
Using regularization to overcome overfitting
In the previous chapter, we saw diminishing returns from further training iterations: a neural network's predictive ability on holdout or test data (that is, data not used to train the model) stops improving and can even degrade. This is because complex models may memorize some of the noise in the data rather than learning the general patterns, and such models then perform much worse when predicting new data. We can apply several methods to make a model generalize, that is, fit the overall patterns rather than the noise. These methods are called regularization; they aim to reduce the test error so that the model performs well on new data.
The most common regularization technique used in deep learning is dropout. However, we will also discuss two other regularization techniques that have their roots in regression and are also used in deep learning: the L1 penalty, also known as Lasso, and the L2 penalty, also known as Ridge.
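To make these penalties concrete: if L(w) is the unregularized loss over the weights w, the L1 penalty trains on L(w) + λ Σ|w_j|, which tends to shrink many weights exactly to zero, while the L2 penalty trains on L(w) + λ Σ w_j², which shrinks all weights towards zero. The following sketch, written with the keras package for R, shows one way dropout and these weight penalties might be attached to a small dense network; the layer sizes, dropout rates, and λ values (the `l` arguments) are illustrative assumptions, not values taken from this book.

```r
library(keras)

# A minimal sketch of a small binary classifier combining all three
# regularization techniques; all hyperparameter values are illustrative.
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(20),
              kernel_regularizer = regularizer_l1(l = 0.001)) %>%  # L1 (Lasso) penalty
  layer_dropout(rate = 0.4) %>%                                    # randomly zero 40% of activations
  layer_dense(units = 32, activation = "relu",
              kernel_regularizer = regularizer_l2(l = 0.001)) %>%  # L2 (Ridge) penalty
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)
```

During training, each dropout layer randomly zeroes a fraction of its inputs on every batch, which discourages units from co-adapting; at prediction time, dropout is disabled automatically.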