- Deep Learning with Keras
- Antonio Gulli, Sujit Pal
Increasing the size of batch computation
Gradient descent tries to minimize the cost function over all the examples in the training set and, at the same time, over all the features provided in the input. Stochastic gradient descent is a much less expensive variant that considers only BATCH_SIZE examples at a time. So, let's see how the behavior changes as we vary this parameter. As you can see, the best accuracy is reached with BATCH_SIZE=128:
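The sweep described above can be sketched as follows. This is a minimal illustration, not the book's exact script: it assumes a simple dense classifier and uses small synthetic data (a stand-in for the chapter's dataset) so it runs quickly; the hypothetical helper `train_with_batch_size` simply retrains the same architecture with a different `batch_size` and reports the final training accuracy.

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in data (assumption: the chapter uses a real dataset
# such as MNIST; random data keeps this sketch self-contained).
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 784)).astype("float32")
y = keras.utils.to_categorical(rng.integers(0, 10, size=512), 10)

def train_with_batch_size(batch_size):
    # Hypothetical helper: same model, same data, only batch_size varies.
    model = keras.Sequential([
        keras.layers.Input(shape=(784,)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(X, y, batch_size=batch_size, epochs=2, verbose=0)
    return history.history["accuracy"][-1]

# Sweep BATCH_SIZE as the text suggests; the book reports 128 as optimal.
for bs in (32, 64, 128, 256):
    acc = train_with_batch_size(bs)
    print(f"BATCH_SIZE={bs}: final training accuracy {acc:.3f}")
```

Note the trade-off this exposes: smaller batches give noisier but more frequent gradient updates, while larger batches give smoother gradients but fewer updates per epoch, which is why an intermediate value such as 128 can come out ahead.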
