
Using regularization to overcome overfitting

In the previous chapter, we saw that additional training iterations eventually yield diminishing returns in a neural network's predictive ability on holdout or test data (that is, data not used to train the model). This is because complex models may memorize some of the noise in the training data rather than learning the general patterns. Such models then perform much worse when predicting new data. There are methods we can apply to make our model generalize, that is, fit the overall patterns rather than the noise. These are called regularization, and they aim to reduce the test error so that the model performs well on new data.

The most common regularization technique used in deep learning is dropout. However, we will also discuss two other regularization techniques that originated in regression and carry over to deep learning: the L1 penalty, also known as Lasso, and the L2 penalty, also known as Ridge.
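To make the two penalties concrete, here is a minimal NumPy sketch (not from the book's own code) of how an L1 or L2 term is added to a model's loss. The function name `penalized_loss` and the example weights are illustrative assumptions; in practice a deep learning framework applies these penalties for you during training.

```python
import numpy as np

def penalized_loss(weights, base_loss, l1=0.0, l2=0.0):
    """Add L1 (Lasso) and/or L2 (Ridge) penalties to a base loss value.

    L1 penalty: sum of absolute weights -> pushes weights to exactly zero (sparsity).
    L2 penalty: sum of squared weights -> shrinks all weights toward zero.
    """
    return base_loss + l1 * np.sum(np.abs(weights)) + l2 * np.sum(weights ** 2)

# Hypothetical weights from a small model
w = np.array([0.5, -1.0, 2.0])

# L1: 1.0 + 0.1 * (0.5 + 1.0 + 2.0) = 1.35
print(penalized_loss(w, base_loss=1.0, l1=0.1))

# L2: 1.0 + 0.1 * (0.25 + 1.0 + 4.0) = 1.525
print(penalized_loss(w, base_loss=1.0, l2=0.1))
```

Because large weights increase the penalty, the optimizer is discouraged from fitting noise with extreme weight values, which is why both penalties improve generalization.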
