- Deep Learning Essentials
- Wei Di, Anurag Bhardwaj, Jianing Wei
Optimization algorithms
Optimization is the key to how a network learns. Learning is fundamentally an optimization process: it minimizes the error or cost by adjusting the network's coefficients step by step in search of the point of least error. A very basic optimization approach is the gradient descent we used in the previous section, but there are multiple variants that do a similar job with small improvements added. TensorFlow provides several optimizers to choose from, for example, GradientDescentOptimizer, AdagradOptimizer, MomentumOptimizer, AdamOptimizer, FtrlOptimizer, and RMSPropOptimizer. For the API and how to use them, please see this page:
https://www.tensorflow.org/versions/master/api_docs/python/tf/train#optimizers.
These optimizers should be sufficient for most deep learning techniques. If you aren’t sure which one to use, use GradientDescentOptimizer as a starting point.
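To make this concrete, here is a minimal sketch of plugging one of these optimizers into a training loop, assuming the TensorFlow 1.x `tf.train` API referenced above; the toy data, learning rate, and step count are illustrative, not from the book:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x, matching the tf.train optimizers above

# Toy data for a linear model y = 3x + 2 with a little noise (hypothetical)
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 2.0 + np.random.normal(0.0, 0.05, 100).astype(np.float32)

# Model coefficients the optimizer will adjust step by step
W = tf.Variable(tf.random_normal([1]))
b = tf.Variable(tf.zeros([1]))
y_pred = W * x_data + b

# Cost to minimize: mean squared error
loss = tf.reduce_mean(tf.square(y_pred - y_data))

# Any optimizer from tf.train can be swapped in here,
# e.g. tf.train.AdamOptimizer(learning_rate=0.01)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
train_step = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        sess.run(train_step)  # one gradient descent update per call
    print(sess.run([W, b]))  # should approach [3.0] and [2.0]
```

Switching optimizers only changes the single constructor line; `minimize(loss)` and the training loop stay the same, which is what makes it cheap to experiment with the alternatives listed above.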