
  • Deep Learning Essentials
  • Wei Di, Anurag Bhardwaj, Jianing Wei

Optimization algorithms

Optimization is the key to how a network learns: learning is essentially an optimization process that minimizes the error, or cost, by adjusting the network coefficients step by step. A very basic optimization approach is the gradient descent we used in the previous section, but there are multiple variants that do a similar job with some improvement added. TensorFlow provides several optimizers to choose from, for example, GradientDescentOptimizer, AdagradOptimizer, MomentumOptimizer, AdamOptimizer, FtrlOptimizer, and RMSPropOptimizer. For the API and how to use them, please see this page:

https://www.tensorflow.org/versions/master/api_docs/python/tf/train#optimizers

These optimizers should be sufficient for most deep learning techniques. If you are not sure which one to use, GradientDescentOptimizer is a good starting point.
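As a minimal sketch of how these optimizers are plugged into training (assuming the TensorFlow 1.x tf.train API documented at the link above, and a hypothetical one-parameter toy model), the following fits a single coefficient with GradientDescentOptimizer; any of the optimizers listed above can be swapped in on the marked line:

    import tensorflow as tf

    # Toy data: fit y = w * x to the single observation (x=3, y=6).
    x = tf.constant(3.0)
    y_true = tf.constant(6.0)

    w = tf.Variable(0.0)                  # network coefficient to learn
    loss = tf.square(w * x - y_true)      # squared-error cost

    # Swap in, for example, tf.train.AdamOptimizer(0.01) or
    # tf.train.RMSPropOptimizer(0.01) to try the other variants.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    train_step = optimizer.minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(100):
            sess.run(train_step)          # adjust w step by step
        print(sess.run(w))                # approaches the solution w = 2

Each of these optimizers exposes the same minimize interface, so comparing variants is usually a one-line change.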
