- Deep Learning with Keras
- Antonio Gulli, Sujit Pal
Hyperparameter tuning
The preceding experiments gave a sense of the opportunities for fine-tuning a net. However, what works for this example does not necessarily work for other examples. For a given net, there are indeed multiple parameters that can be optimized (such as the number of hidden neurons, the BATCH_SIZE, the number of epochs, and many more, depending on the complexity of the net itself).
Hyperparameter tuning is the process of finding the optimal combination of those parameters that minimizes the cost function. The key idea is that if we have n parameters, then we can imagine that they define an n-dimensional space, and the goal is to find the point in this space that corresponds to the optimal value of the cost function. One way to achieve this goal is to create a grid in this space and systematically check the value assumed by the cost function at each grid vertex. In other words, the parameters are divided into buckets, and different combinations of values are checked via a brute-force approach.
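For instance, such a brute-force grid search can be written directly as nested loops over candidate values. The sketch below is a minimal illustration, assuming TensorFlow's Keras API and random stand-in data; the network shape, the parameter grid, and the choice of validation accuracy as the score are illustrative assumptions, not taken from this chapter.

```python
import itertools
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential

# Stand-in data (assumption): 1000 samples, 20 features, 10 classes.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 10, size=1000)

# Illustrative grid: each key is one dimension of the search space.
param_grid = {
    "hidden_neurons": [32, 64],
    "batch_size": [32, 128],
    "epochs": [5, 10],
}

best_score, best_params = -np.inf, None
# Each element of the Cartesian product is one grid vertex,
# i.e., one combination of parameter values to evaluate.
for values in itertools.product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), values))
    model = Sequential([
        Input(shape=(20,)),
        Dense(params["hidden_neurons"], activation="relu"),
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(X, y,
                        batch_size=params["batch_size"],
                        epochs=params["epochs"],
                        validation_split=0.2,
                        verbose=0)
    # Score this vertex by its final validation accuracy.
    score = history.history["val_accuracy"][-1]
    if score > best_score:
        best_score, best_params = score, params

print("Best params:", best_params, "val_accuracy:", best_score)
```

Note that the number of vertices grows multiplicatively with each added parameter (here 2 x 2 x 2 = 8 full training runs), which is why brute-force grid search quickly becomes expensive as the search space grows.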