- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
Getting ready
To understand why the batch size has an impact on model accuracy, let's contrast two scenarios where the total dataset size is 60,000:
- Batch size is 30,000
- Batch size is 32
When the batch size is large, the number of weight updates per epoch is smaller than when the batch size is small.
When the batch size is small, fewer data points are used to calculate each loss value, so more batches are needed to cover all the training data points in an epoch; loosely, an epoch requires one pass through the entire training dataset. This results in a higher number of weight updates per epoch.
Thus, for the same number of epochs, the lower the batch size, the better the accuracy tends to be. However, when deciding on the batch size, you should also ensure that it is not so small that the model overfits to the few data points in each batch.
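The difference in the number of weight updates per epoch can be verified with a quick calculation. The helper function below is a sketch for illustration (not from the book): each batch triggers one weight update, and a partial final batch still counts as a batch.

```python
import math

def updates_per_epoch(n_samples, batch_size):
    # One weight update per batch; a partial final batch still counts.
    return math.ceil(n_samples / batch_size)

n = 60_000
print(updates_per_epoch(n, 30_000))  # 2 updates per epoch
print(updates_per_epoch(n, 32))      # 1875 updates per epoch
```

So over the same number of epochs, a batch size of 32 performs roughly 900 times more weight updates than a batch size of 30,000, which is why the smaller batch size can reach a better accuracy in the same training budget.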