
Stochastic gradient descent

Stochastic gradient descent is a variation of the gradient descent algorithm used to train deep learning models. The basic idea is that instead of computing the gradient over the whole training set, each parameter update is computed from a subset of the data. In theory, a single sample per update is enough to train the network, but in practice a fixed number of input samples, a mini-batch, is usually used. Because each update is much cheaper, this approach trains the model faster than vanilla (full-batch) gradient descent.
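
As a concrete illustration, the following NumPy sketch fits a one-variable linear model with mini-batch stochastic gradient descent. The synthetic data, batch size, learning rate, and variable names are assumptions chosen for this example, not code from the book; the point is simply that each update uses the gradient of the loss on a small batch rather than on the full dataset.

```python
# Minimal sketch of mini-batch SGD on a linear model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 2 plus a little noise (assumed for the example).
X = rng.uniform(-1, 1, size=(1000, 1))
y = 3 * X[:, 0] + 2 + 0.1 * rng.standard_normal(1000)

w, b = 0.0, 0.0      # model parameters
lr = 0.1             # learning rate (assumed value)
batch_size = 32      # mini-batch size (assumed value)
epochs = 20

for epoch in range(epochs):
    # Shuffle the data so each epoch visits the mini-batches in a new order.
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx, 0], y[idx]

        # Gradient of the mean squared error on this mini-batch only.
        err = (w * xb + b) - yb
        grad_w = 2 * np.mean(err * xb)
        grad_b = 2 * np.mean(err)

        # Update with the mini-batch gradient instead of the full-data gradient.
        w -= lr * grad_w
        b -= lr * grad_b

print(f"learned w={w:.3f}, b={b:.3f}")  # should approach w=3, b=2
```

With a batch size of 1 this reduces to pure stochastic gradient descent, and with a batch size equal to the dataset size it becomes ordinary full-batch gradient descent.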
