Overcoming over-fitting using regularization
In the previous section, we established that high weight magnitudes are one of the causes of over-fitting. In this section, we will look into ways to get around the problem of over-fitting, such as penalizing high weight magnitudes.
Regularization imposes a penalty for having high-magnitude weights in the model. L1 and L2 regularization are among the most commonly used regularization techniques, and they work as follows:
L2 regularization minimizes the weighted sum of squares of the weights at the specified layer of the neural network, in addition to minimizing the loss function (which is the sum of squared loss in the following formula):

$$\text{Loss} = \sum_{i}(y_i - \hat{y}_i)^2 + \lambda \sum_{j} w_j^2$$

where $\lambda$ is the weightage associated with the regularization term and is a hyperparameter that needs to be tuned, $\hat{y}_i$ is the predicted value of $y_i$, and $w_j$ are the weight values across all the layers of the model.
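As a minimal sketch of how this looks in Keras (the 784-dimensional input, layer sizes, and the 0.01 weightage for $\lambda$ are illustrative assumptions, not values from this recipe), L2 regularization can be attached to a layer through its kernel_regularizer argument:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import regularizers

# Illustrative architecture: the input dimension, layer sizes, and the
# 0.01 weightage (lambda) are assumptions for this sketch.
model = Sequential([
    Dense(1000, input_dim=784, activation='relu',
          kernel_regularizer=regularizers.l2(0.01)),  # penalize sum of squared weights
    Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

A larger weightage pushes the weights harder toward zero, so in practice it is tuned against validation performance like any other hyperparameter.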
L1 regularization minimizes the weighted sum of the absolute values of the weights at the specified layer of the neural network, in addition to minimizing the loss function (which is the sum of squared loss in the following formula):

$$\text{Loss} = \sum_{i}(y_i - \hat{y}_i)^2 + \lambda \sum_{j} |w_j|$$
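Switching the penalty only changes the regularizer object attached to the layer; a brief sketch (again with an assumed weightage of 0.01):

```python
from tensorflow.keras import regularizers

l1_reg = regularizers.l1(0.01)                    # penalize sum of |w|; 0.01 is an assumed weightage
l1_l2_reg = regularizers.l1_l2(l1=0.01, l2=0.01)  # both penalties combined
# Pass either object as kernel_regularizer to a Dense layer,
# exactly as in the L2 example above.
```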
This way, we ensure that the weights are not tuned to extreme cases that occur only in the training dataset (and thus fail to generalize to the test data).