- Deep Learning Quick Reference
- Mike Bernico
Tuning the model hyperparameters
Now that we've trained an MLP and a six-layer deep neural network on the problem, we're ready to tune and optimize model hyperparameters.
We will discuss model tuning in depth in Chapter 6, Hyperparameter Optimization. There are a variety of strategies that you can use to choose the best parameters for your model. As you've probably noticed, there are many possible parameters and hyperparameters that we could still optimize.
If you wanted to fully tune this model, you would do the following:
- Experiment with the number of hidden layers. It appears that five might be too many, and one might not be enough.
- Experiment with the number of neurons in each hidden layer, relative to the number of layers.
- Experiment with adding dropout or regularization.
- Attempt to further reduce model error by trying SGD or RMSProp instead of Adam, or by using a different learning rate for Adam.
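The experiments above can be organized as a search over a hyperparameter space. As a minimal sketch, here is a random search over the kinds of choices listed: the candidate values, and the placeholder `fake_validation_error` objective (which stands in for actually building, training, and validating a network), are illustrative assumptions, not values from this chapter:

```python
import random

# Hypothetical search space covering the experiments suggested above.
# The specific layer counts, sizes, and rates are assumptions for illustration.
search_space = {
    "hidden_layers": [1, 2, 3, 4, 5],
    "neurons_per_layer": [32, 64, 128, 256],
    "dropout": [0.0, 0.25, 0.5],
    "optimizer": ["adam", "sgd", "rmsprop"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}


def sample_config(space, rng):
    """Draw one random hyperparameter configuration from the space."""
    return {name: rng.choice(values) for name, values in space.items()}


def random_search(space, evaluate, n_trials=20, seed=42):
    """Try n_trials random configurations; return the best (score, config).

    `evaluate` should return a validation error (lower is better), e.g.
    the validation MAE of a network built and trained with that config.
    """
    rng = random.Random(seed)
    best_score, best_config = float("inf"), None
    for _ in range(n_trials):
        config = sample_config(space, rng)
        score = evaluate(config)
        if score < best_score:
            best_score, best_config = score, config
    return best_score, best_config


# Placeholder objective: in practice this would train and validate a model.
def fake_validation_error(config):
    return abs(config["hidden_layers"] - 3) + config["dropout"]


best_score, best_config = random_search(search_space, fake_validation_error)
print(best_score, best_config)
```

Random search is only one of the strategies the chapter alludes to; Chapter 6 covers more systematic approaches to the same loop.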
Deep neural networks have so many moving parts that reaching a truly optimal configuration can be exhausting. You'll have to decide whether your model is good enough.