- Deep Learning with Keras
- Antonio Gulli, Sujit Pal
Hyperparameter tuning
The preceding experiments gave a sense of the opportunities for fine-tuning a net. However, what works for this example does not necessarily work for other examples. For a given net, there are indeed multiple parameters that can be optimized, such as the number of hidden neurons, BATCH_SIZE, the number of epochs, and many more depending on the complexity of the net itself.
Hyperparameter tuning is the process of finding the optimal combination of those parameters that minimizes the cost function. The key idea is that if we have n parameters, then we can imagine that they define an n-dimensional space, and the goal is to find the point in this space that corresponds to an optimal value of the cost function. One way to achieve this goal is to create a grid in this space and systematically check the value assumed by the cost function at each grid vertex. In other words, the parameters are divided into buckets, and different combinations of values are checked via a brute-force approach.
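The following is a minimal sketch of this brute-force grid search in Keras. The model architecture, the candidate values in the grid, and the random stand-in data are illustrative assumptions, not taken from the book; in practice you would plug in your own model-building function and a real train/validation split.

```python
import itertools
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

# Hypothetical stand-in data; replace with a real dataset such as MNIST.
X_train = np.random.rand(1000, 784).astype("float32")
y_train = to_categorical(np.random.randint(0, 10, 1000), 10)
X_val = np.random.rand(200, 784).astype("float32")
y_val = to_categorical(np.random.randint(0, 10, 200), 10)

def build_model(n_hidden):
    # A simple one-hidden-layer classifier; the hidden width is a hyperparameter.
    model = Sequential([
        Dense(n_hidden, activation="relu", input_shape=(784,)),
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Each hyperparameter is divided into buckets: a small set of candidate values.
grid = {
    "n_hidden": [64, 128, 256],
    "batch_size": [64, 128],
    "epochs": [5, 10],
}

best_loss, best_params = float("inf"), None
# Visit every vertex of the grid (the Cartesian product of all buckets)
# and keep the combination with the lowest validation loss.
for n_hidden, batch_size, epochs in itertools.product(
        grid["n_hidden"], grid["batch_size"], grid["epochs"]):
    model = build_model(n_hidden)
    model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=0)
    loss, _ = model.evaluate(X_val, y_val, verbose=0)
    if loss < best_loss:
        best_loss, best_params = loss, (n_hidden, batch_size, epochs)

print("best params:", best_params, "validation loss:", best_loss)
```

Note that the number of models trained grows multiplicatively with the number of buckets per parameter (here 3 × 2 × 2 = 12 runs), which is why brute-force grid search quickly becomes expensive as the dimensionality grows.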