R Deep Learning Essentials
Mark Hodnett, Joshua F. Wiley
Using regularization to overcome overfitting
In the previous chapter, we saw diminishing returns from further training iterations on neural networks in terms of their predictive ability on holdout or test data (that is, data not used to train the model). This happens because complex models may memorize some of the noise in the data rather than learning the general patterns, and such models then perform much worse when predicting new data. There are methods we can apply to make a model generalize, that is, fit the overall patterns. These methods are collectively called regularization; they aim to reduce test error so that the model performs well on new data.
The most common regularization technique used in deep learning is dropout. However, we will also discuss two other regularization techniques that originated in regression and carry over to deep learning: the L1 penalty, also known as Lasso, and the L2 penalty, also known as Ridge.
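To make these three techniques concrete, here is a minimal sketch using the R keras package. The L1 penalty adds lambda * sum(|w|) to the loss, which pushes weights toward exactly zero; the L2 penalty adds lambda * sum(w^2), which shrinks weights smoothly; dropout randomly zeroes a fraction of activations during each training update. The layer sizes, penalty strength (0.01), and dropout rate (0.5) are illustrative assumptions, not values from the book.

```r
# A minimal sketch of dropout, L1, and L2 regularization with the R keras
# package (assumed here for illustration; the book's own examples may use
# a different framework or hyperparameters).
library(keras)

model <- keras_model_sequential() %>%
  # L2 (Ridge) penalty: adds 0.01 * sum(w^2) to the loss for this layer's weights
  layer_dense(units = 64, activation = "relu", input_shape = c(100),
              kernel_regularizer = regularizer_l2(l = 0.01)) %>%
  # Dropout: randomly zeroes 50% of the preceding activations at each training step
  layer_dropout(rate = 0.5) %>%
  # L1 (Lasso) penalty: adds 0.01 * sum(|w|), driving some weights to exactly zero
  layer_dense(units = 32, activation = "relu",
              kernel_regularizer = regularizer_l1(l = 0.01)) %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  optimizer = "rmsprop",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)
```

Note that dropout is active only during training; at prediction time, all units are used, so the regularization costs nothing at inference.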