- Deep Learning with Theano
- Christopher Bourez
Dropout
Dropout is a widely used technique to improve the convergence and robustness of a neural net and to prevent it from overfitting. It consists of setting randomly chosen values to zero in the layers to which we apply it, introducing randomness into the activations at every training pass.
Usually, dropout is used before the fully connected layers and not used very often in convolutional layers. Let's add the following lines before each of our two fully connected layers:
```python
dropout = 0.5
if dropout > 0:
    # srng is a Theano random stream (e.g. MRG_RandomStreams)
    # defined earlier in the script
    mask = srng.binomial(n=1, p=1 - dropout, size=hidden_input.shape)
    # The cast is important: int * float32 = float64,
    # which makes execution slower
    hidden_input = hidden_input * T.cast(mask, theano.config.floatX)
```
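For readers without a Theano setup, the same masking logic can be sketched framework-agnostically with NumPy. This is an illustrative sketch, not the book's code: the function name `dropout_layer` is made up, and it uses the common "inverted dropout" variant, which rescales surviving activations at training time so nothing needs to change at test time.

```python
import numpy as np

def dropout_layer(x, p_drop, rng, train=True):
    """Inverted dropout: zero each unit with probability p_drop and
    scale survivors by 1/(1 - p_drop), so the expected activation is
    unchanged and no rescaling is needed at test time."""
    if not train or p_drop <= 0:
        return x
    # Bernoulli mask: keep each unit with probability 1 - p_drop
    mask = rng.binomial(n=1, p=1 - p_drop, size=x.shape).astype(x.dtype)
    return x * mask / (1 - p_drop)

rng = np.random.default_rng(0)
x = np.ones((4, 5), dtype=np.float32)
y = dropout_layer(x, 0.5, rng)  # entries are either 0.0 or 2.0 (= 1/(1-p))
```

At test time (`train=False`) the input passes through untouched, which mirrors the usual practice of disabling dropout during evaluation.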
The full script is in 5-cnn-with-dropout.py. After 1,000 iterations, the validation error of the CNN with dropout continues to drop, down to 1.08%, while the validation error of the CNN without dropout does not go below 1.22%.
Readers who would like to go further with dropout should have a look at maxout units. They work well with dropout, and replacing the tanh non-linearities with them gives even better results. As dropout performs a kind of model averaging, maxout units try to learn the optimal non-linearity for the problem.
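As a rough sketch of the idea (the function and parameter names below are illustrative, not from the book): a maxout unit computes k affine projections of its input and keeps the element-wise maximum, so the network learns its own convex piecewise-linear activation instead of using a fixed tanh.

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit.
    x: (batch, d_in), W: (k, d_in, d_out), b: (k, d_out).
    Returns the element-wise max over the k affine pieces."""
    # z[i] = x @ W[i] + b[i] for each of the k pieces -> (k, batch, d_out)
    z = np.einsum('bi,kio->kbo', x, W) + b[:, None, :]
    return z.max(axis=0)

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 3)).astype(np.float32)
W = rng.normal(size=(4, 3, 5)).astype(np.float32)  # k = 4 pieces
b = np.zeros((4, 5), dtype=np.float32)
out = maxout(x, W, b)  # shape (2, 5)
```

With k = 2 and suitable weights a maxout unit can recover ReLU or absolute value, which is why it pairs naturally with dropout's model-averaging behaviour.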