Optimization algorithms
Optimization is the key to how a network learns; learning is essentially an optimization process. It minimizes an error or cost function by adjusting the network's weights step by step until it reaches a point of (locally) minimal error. The most basic approach is the gradient descent we used in the previous section, but there are several variants that do a similar job with some improvement added. TensorFlow provides multiple optimizers to choose from, for example, GradientDescentOptimizer, AdagradOptimizer, MomentumOptimizer, AdamOptimizer, FtrlOptimizer, and RMSPropOptimizer. For the API and how to use them, please see this page:
https://www.tensorflow.org/versions/master/api_docs/python/tf/train#optimizers.
These optimizers should be sufficient for most deep learning techniques. If you are unsure which one to use, GradientDescentOptimizer is a good starting point.
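As a concrete illustration, here is a minimal sketch of how these optimizers plug into a training loop, assuming the TensorFlow 1.x tf.train API referenced above. The toy regression problem and all variable names are our own, and any of the optimizers listed earlier could be swapped in with a one-line change:

```python
import tensorflow as tf  # assumes TensorFlow 1.x, matching the tf.train.* API above

# Hypothetical toy problem: fit y = 2x with a single trainable weight.
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)

# Mean squared error between the model's prediction and the targets.
loss = tf.reduce_mean(tf.square(w * x - y))

# Any optimizer listed above can replace this line, e.g.
# tf.train.AdamOptimizer(0.01) or tf.train.RMSPropOptimizer(0.01).
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        sess.run(train_op, feed_dict={x: [1.0, 2.0, 3.0], y: [2.0, 4.0, 6.0]})
    print(sess.run(w))  # should approach 2.0
```

Because every optimizer exposes the same minimize(loss) interface, comparing them on a given model usually means changing only the optimizer construction line and, if needed, its learning rate.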