- Reinforcement Learning with TensorFlow
- Sayon Dutta
The vanishing gradient problem
The vanishing gradient problem is one of the problems associated with training artificial neural networks: neurons in the early layers fail to learn because the gradients used to update their weights shrink toward zero. This happens when the network is deep and its activation functions have derivatives that take small values.
Try the following steps:
- Create a neural network with one hidden layer
- Add more hidden layers, one at a time
If we observe the gradient with respect to every node, we find that the gradient values become progressively smaller as we move from the later layers to the early layers, and the effect worsens with each additional layer. In other words, the early layer neurons learn far more slowly than the later layer neurons. This condition is called the vanishing gradient problem.
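The experiment above can be sketched numerically without a full training loop. The snippet below (a minimal NumPy sketch, not from the book; the layer width, weight scale, and helper names are illustrative assumptions) backpropagates a unit gradient through a stack of sigmoid layers and measures its norm at the first layer. Because the sigmoid derivative is at most 0.25, each extra layer multiplies the gradient by a small factor, so the norm collapses as depth grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_norm_at_first_layer(depth, width=16):
    """Forward a random input through `depth` sigmoid layers, then
    backpropagate a unit gradient and return its norm at the first layer.
    (Hypothetical helper for illustration only.)"""
    a = rng.normal(size=width)
    # Random weight matrices, scaled so activations stay in a sane range.
    Ws = [rng.normal(size=(width, width)) / np.sqrt(width) for _ in range(depth)]
    acts = []
    for W in Ws:
        a = sigmoid(W @ a)
        acts.append(a)
    g = np.ones(width)  # pretend dLoss/dActivation = 1 at the output
    for W, a in zip(reversed(Ws), reversed(acts)):
        # Chain rule: sigmoid'(z) = a * (1 - a), then push through W^T.
        g = W.T @ (g * a * (1.0 - a))
    return np.linalg.norm(g)

for depth in (1, 3, 6, 10):
    print(f"depth={depth:2d}  |grad| at first layer = {grad_norm_at_first_layer(depth):.2e}")
```

Running it shows the first-layer gradient norm dropping by orders of magnitude between a 1-layer and a 10-layer network, which is exactly the slow early-layer learning described above.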