Deep Learning with Theano
Christopher Bourez
Dropout
Dropout is a widely used technique to improve the convergence and robustness of a neural net and to prevent it from overfitting. It consists of randomly setting some values to zero in the layers to which it is applied, which introduces some randomness into the data at every epoch.
Dropout is usually applied before the fully connected layers and is rarely used in convolutional layers. Let's add the following lines before each of our two fully connected layers:
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

srng = RandomStreams(seed=1234)  # shared random stream used to draw the mask

dropout = 0.5

if dropout > 0:
    mask = srng.binomial(n=1, p=1 - dropout, size=hidden_input.shape)
    # The cast is important because int * float32 = float64,
    # which makes execution slower
    hidden_input = hidden_input * T.cast(mask, theano.config.floatX)
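Note that this mask should only be applied during training; at test time, activations are conventionally rescaled so their expected value stays the same. A common variant, inverted dropout, performs the rescaling at training time instead, so the test-time graph needs no change. A minimal sketch, reusing the same srng and hidden_input; the 1 / (1 - dropout) scaling is this variant's convention, not the book's code:

if dropout > 0:
    mask = srng.binomial(n=1, p=1 - dropout, size=hidden_input.shape)
    # Scale up surviving activations so their expected value is unchanged
    hidden_input = hidden_input * T.cast(mask, theano.config.floatX) / (1 - dropout)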
The full script is in 5-cnn-with-dropout.py. After 1,000 iterations, the validation error of the CNN with dropout continues to drop, down to 1.08%, while the validation error of the CNN without dropout does not go below 1.22%.
Readers who would like to go further with dropout should have a look at maxout units. They work well with dropout and replace the tanh non-linearity to get even better results. As dropout performs a kind of model averaging, maxout units try to find the optimal non-linearity for the problem.
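To make the idea concrete, here is a minimal sketch of a maxout hidden layer in Theano: instead of applying tanh to one linear projection, it computes k linear pieces and takes their element-wise maximum, so the shape of the activation is learned. The names (maxout_layer, n_in, n_out, k) and the weight initialization are illustrative assumptions, not code from the book:

import numpy
import theano
import theano.tensor as T

def maxout_layer(x, n_in, n_out, k, rng):
    # k linear pieces packed into one weight matrix of shape (n_in, n_out * k)
    W = theano.shared(numpy.asarray(
        rng.uniform(low=-0.1, high=0.1, size=(n_in, n_out * k)),
        dtype=theano.config.floatX))
    b = theano.shared(numpy.zeros((n_out * k,), dtype=theano.config.floatX))
    lin = (T.dot(x, W) + b).reshape((x.shape[0], n_out, k))
    # The learned non-linearity: the element-wise maximum over the k pieces
    return T.max(lin, axis=2), [W, b]

The dropout mask shown above can then be applied to the maxout output in exactly the same way, replacing the tanh layer with this one.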