
Using regularization to overcome overfitting

In the previous chapter, we saw that further training iterations yield diminishing returns in a neural network's predictive ability on holdout or test data (that is, data not used to train the model). This happens because complex models may memorize some of the noise in the training data rather than learning the general patterns. Such models then perform much worse when predicting new data. There are methods we can apply to make our model generalize, that is, fit the overall patterns rather than the noise. These methods are collectively called regularization, and they aim to reduce the test error so that the model performs well on new data.

The most common regularization technique used in deep learning is dropout. However, we will also discuss two other regularization techniques that have their roots in regression and are also used in deep learning: the L1 penalty, also known as Lasso, and the L2 penalty, also known as Ridge.
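To make these three techniques concrete before we examine them in detail, the following is a minimal sketch in Python using Keras. The library choice, layer sizes, input dimension, and penalty strengths of 0.001 are illustrative assumptions, not taken from this chapter. It applies an L1 penalty to one layer's weights, an L2 penalty to another's, and places a dropout layer between them.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),  # assumed input dimension of 20 features
    # L1 (Lasso) penalty: adds 0.001 * sum(|w|) for this layer's weights to the loss
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(0.001)),
    # Dropout: randomly zeroes 50% of this layer's activations on each training batch
    layers.Dropout(0.5),
    # L2 (Ridge) penalty: adds 0.001 * sum(w^2) for this layer's weights to the loss
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(0.001)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

Note that the L1 and L2 terms are added to the training loss on every update, while dropout is only active during training; at prediction time all units are used, so the penalties and dropout together discourage the network from relying too heavily on any individual weight.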
