- Reinforcement Learning with TensorFlow
- Sayon Dutta
The vanishing gradient problem
The vanishing gradient problem is one of the problems associated with training artificial neural networks: neurons in the early layers fail to learn because the gradients that update their weights shrink toward zero. This happens as the depth of the network grows, combined with activation functions whose derivatives take small values.
Try the following steps:
- Create a neural network with one hidden layer
- Add more hidden layers, one by one
We observe the gradient with respect to every node and find that the gradient values become progressively smaller as we move from the later layers toward the early layers. The condition worsens as more layers are added, which shows that the early-layer neurons learn more slowly than the later-layer neurons. This condition is called the vanishing gradient problem.
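The experiment above can be sketched numerically. The snippet below is a minimal illustration (not the book's code): it backpropagates through a chain of single-neuron sigmoid layers and records the gradient magnitude at each depth. Because the sigmoid derivative is at most 0.25, each extra layer multiplies the gradient by a small factor, so the first layer's gradient is orders of magnitude smaller than the last layer's. The function name `layer_gradients` and the toy weight/input values are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # derivative of the sigmoid: s * (1 - s), which peaks at 0.25
    s = sigmoid(x)
    return s * (1.0 - s)

def layer_gradients(depth, weight=1.0, x=0.5):
    """Backpropagate through `depth` chained single-neuron sigmoid layers.

    Returns the gradient magnitude seen at each layer, ordered from the
    last (output-side) layer back to the first (input-side) layer.
    """
    # Forward pass: record each layer's pre-activation z = w * a.
    pre_activations = []
    a = x
    for _ in range(depth):
        z = weight * a
        pre_activations.append(z)
        a = sigmoid(z)

    # Backward pass: the chain rule multiplies in sigmoid'(z) * w per layer.
    grads = []
    g = 1.0
    for z in reversed(pre_activations):
        g *= sigmoid_grad(z) * weight
        grads.append(abs(g))
    return grads  # grads[0] = last layer, grads[-1] = first layer

grads = layer_gradients(depth=8)
print(f"gradient at last layer:  {grads[0]:.6f}")
print(f"gradient at first layer: {grads[-1]:.2e}")
```

Running this shows the first layer's gradient is several orders of magnitude smaller than the last layer's, matching the observation above; adding more layers (a larger `depth`) makes the gap worse.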