- Deep Learning with Keras
- Antonio Gulli Sujit Pal
- 2021-07-02 23:58:05
Increasing the size of batch computation
Gradient descent tries to minimize the cost function over all the examples in the training set and, at the same time, over all the features provided in the input. Stochastic gradient descent (SGD) is a much less expensive variant that considers only BATCH_SIZE examples at a time. So, let's see how the behavior changes as we vary this parameter. As you can see, the optimal accuracy value is reached at BATCH_SIZE=128:
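The idea above can be sketched outside of Keras as well. Below is a minimal NumPy illustration of mini-batch gradient descent on a toy least-squares problem: each update uses only `batch_size` examples rather than the full training set. The function name, learning rate, and toy data are illustrative assumptions, not the book's experiment, which trains the MNIST network in Keras while varying BATCH_SIZE.

```python
import numpy as np

def minibatch_sgd(X, y, batch_size=128, lr=0.01, epochs=50, seed=0):
    """Mini-batch SGD for least squares: each gradient step uses only
    `batch_size` examples instead of the entire training set.
    (Illustrative sketch, not the book's Keras code.)"""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)          # reshuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # Gradient of mean squared error on the mini-batch only
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

# Toy data: y = 3*x0 - 2*x1 plus a little noise
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = X @ np.array([3.0, -2.0]) + 0.01 * rng.normal(size=1000)
w = minibatch_sgd(X, y, batch_size=128)
print(w)  # weights close to [3, -2]
```

Smaller batches give noisier but cheaper updates; larger batches give smoother gradients at more cost per step, which is why an intermediate value such as 128 often works well in practice.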
