Deep Learning with R for Beginners
Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado
Using regularization to overcome overfitting
In the previous chapter, we saw diminishing returns from additional training iterations in terms of a neural network's predictive ability on holdout or test data (that is, data not used to train the model). This happens because a complex model can memorize some of the noise in the training data rather than learning the general patterns; such a model then performs much worse when predicting new data. Methods that help a model generalize, that is, fit the overall patterns rather than the noise, are called regularization. They aim to reduce test error so that the model performs well on unseen data.
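To make this concrete, the following is a minimal, self-contained sketch of how overfitting shows up when part of the data is held out during training. It uses the keras package for R with simulated data; the network size, number of epochs, and all hyperparameters are illustrative choices of our own, not values from the book:

```r
library(keras)

# Simulate noisy binary-classification data (illustrative, not from the book).
set.seed(42)
x_train <- matrix(rnorm(1000 * 20), ncol = 20)
y_train <- as.numeric(rowSums(x_train[, 1:5]) + rnorm(1000) > 0)

# A deliberately large, unregularized network that can memorize noise.
model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = 20) %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(optimizer = "adam", loss = "binary_crossentropy",
                  metrics = "accuracy")

history <- model %>% fit(
  x_train, y_train,
  epochs = 50, batch_size = 32,
  validation_split = 0.2,  # hold out 20% of the rows as unseen data
  verbose = 0
)

# Validation loss that rises while training loss keeps falling is the
# signature of overfitting described above.
plot(history)
```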
The most common regularization technique used in deep learning is dropout. However, we will also discuss two other regularization techniques that originated in regression and are also used in deep learning: the L1 penalty, which is also known as Lasso, and the L2 penalty, which is also known as Ridge.
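As a preview, here is a hedged sketch of how all three techniques can be attached to a model with the keras package for R. This is not the book's code; the layer sizes, dropout rate, and penalty weights (`l = 0.001`) are hypothetical values chosen only to show where each technique plugs in:

```r
library(keras)

model <- keras_model_sequential() %>%
  # L1 (lasso) penalty on this layer's weights encourages sparsity.
  layer_dense(units = 128, activation = "relu", input_shape = 20,
              kernel_regularizer = regularizer_l1(l = 0.001)) %>%
  # Dropout randomly zeroes 50% of activations during training.
  layer_dropout(rate = 0.5) %>%
  # L2 (ridge) penalty shrinks this layer's weights toward zero.
  layer_dense(units = 128, activation = "relu",
              kernel_regularizer = regularizer_l2(l = 0.001)) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 1, activation = "sigmoid")
```

In practice, the penalty weights and dropout rate are tuned on validation data; we examine each technique in detail in the sections that follow.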