
Optimization algorithms

Optimization is the key to how a network learns; learning is essentially an optimization process. It refers to adjusting the network coefficients step by step so that the error, or cost, is minimized. A very basic optimization approach is the gradient descent we used in the previous section. However, there are multiple variations that do a similar job, each with some improvement added. TensorFlow provides multiple optimizers to choose from, for example, GradientDescentOptimizer, AdagradOptimizer, MomentumOptimizer, AdamOptimizer, FtrlOptimizer, and RMSPropOptimizer. For the API and how to use them, please see this page:

https://www.tensorflow.org/versions/master/api_docs/python/tf/train#optimizers.

These optimizers should be sufficient for most deep learning techniques. If you aren’t sure which one to use, use GradientDescentOptimizer as a starting point.
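As a concrete illustration, here is a minimal sketch of using GradientDescentOptimizer from TensorFlow's 1.x tf.train API to fit a single weight by minimizing a mean squared error loss. The toy data, learning rate, and variable names are illustrative assumptions, not values from the text:

```python
import tensorflow as tf

# Toy data: y = 2x, so the fitted weight should approach 2.0.
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
w = tf.Variable(0.0)

pred = w * x
loss = tf.reduce_mean(tf.square(pred - y))  # mean squared error

# The optimizer computes gradients of the loss and updates w step by step.
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_step, feed_dict={x: [1., 2., 3.], y: [2., 4., 6.]})
    print(sess.run(w))  # close to 2.0 after training
```

Any of the optimizers listed above can be swapped in on the train_step line, for example tf.train.AdamOptimizer(0.01), without changing the rest of the graph.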
