Hyperparameter optimization
The model that we trained might not be perfect, but we can tune its hyperparameters to improve it. A 3D-GAN has many hyperparameters that can be optimized, including the following (a minimal Keras sketch follows this list):
- Batch size: Experiment with values of 8, 16, 32, 64, or 128 for the batch size.
- The number of epochs: Start with 100 epochs and gradually increase to 1,000-5,000.
- Learning rate: This is the most important hyperparameter. Experiment with 0.1, 0.001, 0.0001, and other small learning rates.
- Activation functions in different layers of the generator and discriminator networks: Experiment with sigmoid, tanh, ReLU, LeakyReLU, ELU, SELU, and other activation functions.
- The optimization algorithm: Experiment with Adam, SGD, Adadelta, RMSProp, and other optimizers available in the Keras framework.
- Loss functions: Binary cross entropy is the loss function best suited for a 3D-GAN.
- The number of layers in both networks: Experiment with different depths depending on the amount of training data available; you can make the networks deeper if you have enough data to train them.
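To make these options concrete, here is a minimal sketch of how such hyperparameters might be wired into a Keras setup. The small discriminator below is only a stand-in for the chapter's 3D-GAN networks; the `build_discriminator` helper and its layer sizes are illustrative assumptions, not the book's exact code:

```python
from tensorflow.keras import layers, models, optimizers

# Hyperparameters to experiment with
batch_size = 32          # try 8, 16, 32, 64, or 128
epochs = 100             # gradually increase toward 1,000-5,000
learning_rate = 0.0001   # try 0.1, 0.001, 0.0001, and other small values

def build_discriminator():
    # Stand-in for the 3D-GAN discriminator, which takes 64x64x64 voxel grids
    model = models.Sequential([
        layers.Input(shape=(64, 64, 64, 1)),
        layers.Conv3D(32, kernel_size=4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),   # swap in ReLU, ELU, SELU, etc. to compare
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])
    return model

discriminator = build_discriminator()
discriminator.compile(
    optimizer=optimizers.Adam(learning_rate=learning_rate),  # or SGD, Adadelta, RMSprop
    loss="binary_crossentropy",
)
```

Changing one hyperparameter at a time (for example, only the learning rate) makes it easier to attribute any improvement to a specific change.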
We can also carry out automatic hyperparameter optimization by using libraries such as Hyperopt (https://github.com/hyperopt/hyperopt) or Hyperas (https://github.com/maxpumperla/hyperas) to select the best set of hyperparameters.
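As a rough illustration, here is a minimal Hyperopt sketch. The `train_gan` function is a hypothetical placeholder for the chapter's training loop; it should train briefly with the sampled hyperparameters and return a scalar score to minimize, such as a validation loss:

```python
from hyperopt import Trials, fmin, hp, tpe

# Search space: hp.choice picks from a list; hp.loguniform samples
# exp(uniform(low, high)), i.e. roughly 1e-4 to 0.37 here.
space = {
    "batch_size": hp.choice("batch_size", [8, 16, 32, 64, 128]),
    "learning_rate": hp.loguniform("learning_rate", -9, -1),
}

def train_gan(batch_size, learning_rate):
    # Hypothetical stand-in: replace with the real 3D-GAN training loop
    # and return a score to minimize (e.g., discriminator validation loss).
    return (learning_rate - 1e-3) ** 2 + batch_size * 1e-6

def objective(params):
    return train_gan(batch_size=params["batch_size"],
                     learning_rate=params["learning_rate"])

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=25, trials=trials)
print(best)  # note: hp.choice entries are reported as list indices
```

Hyperas offers a similar workflow with a template-style syntax that plugs directly into Keras model definitions.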