
Stochastic gradient descent

Stochastic gradient descent is a variation of the gradient descent algorithm used to train deep learning models. The basic idea is that, instead of computing the gradient over the entire training set at each step, only a subset of the data is used. In the strictest form, a single sample per update is enough to train the network, but in practice a fixed-size subset of the input data, called a mini-batch, is usually used. This approach results in faster training compared to vanilla gradient descent.
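As a rough illustration of the idea, the following is a minimal NumPy sketch of mini-batch stochastic gradient descent for a simple linear model with a squared-error loss. The synthetic data, learning rate, and batch size here are illustrative choices, not values from the text.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 2 plus a small amount of noise (illustrative only)
X = rng.uniform(-1, 1, size=(1000, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.1, size=1000)

w, b = 0.0, 0.0          # model parameters
lr = 0.1                 # learning rate
batch_size = 32          # size of the mini-batch used per update
epochs = 20

for epoch in range(epochs):
    # Shuffle the data so each epoch visits mini-batches in a new order
    indices = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = indices[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]

        # Gradient of the mean squared error over this mini-batch only,
        # not over the full training set as in vanilla gradient descent
        pred = w * xb + b
        err = pred - yb
        grad_w = 2.0 * np.mean(err * xb)
        grad_b = 2.0 * np.mean(err)

        # Parameter update using the mini-batch gradient
        w -= lr * grad_w
        b -= lr * grad_b

print(f"learned w={w:.3f}, b={b:.3f}")  # should approach w=3, b=2

Each update uses only the gradient computed on the current mini-batch, which is why the parameters move after every batch rather than once per full pass over the data.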
