- Reinforcement Learning with TensorFlow
- Sayon Dutta
- 2021-08-27 18:51:56
Limitations of deep learning
Deep neural networks are black boxes of weights and biases, trained over large amounts of data to find hidden patterns through inner representations. Inspecting those inner representations directly would be practically impossible for humans, and even if it were possible, scalability would be an issue: every neuron probably has a different weight, and thus a different gradient.
Training happens during backpropagation, so the direction of training is always from the later layers (output/right side) to the early layers (input/left side). As a result, the later layers learn much better than the early layers, and the deeper the network gets, the worse this condition becomes. This gives rise to two problems associated with deep learning:
- The vanishing gradient problem
- The exploding gradient problem
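The two failure modes above can be sketched numerically. The snippet below is an illustrative sketch, not code from the book; the layer count, width, and weight scale are arbitrary choices. It backpropagates a gradient through a 10-layer chain of random weight matrices, once with sigmoid activations (whose derivative is at most 0.25, so the gradient norm shrinks on its way back to the early layers) and once with linear activations (where the same weights multiply the gradient norm up at every layer).

```python
import numpy as np

# Illustrative sketch (not from the book): watch a gradient vanish or
# explode as it is backpropagated from the output layer to the input layer.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_layers, width = 10, 32  # arbitrary depth and layer width
weights = [rng.normal(0.0, 0.5, (width, width)) for _ in range(n_layers)]

# Forward pass through the sigmoid chain, storing pre-activations.
a = rng.normal(0.0, 1.0, width)
pre_acts = []
for W in weights:
    z = W @ a
    pre_acts.append(z)
    a = sigmoid(z)

# Backward pass with sigmoid: the derivative s*(1-s) is at most 0.25,
# so the gradient norm shrinks layer by layer toward the early layers.
grad = np.ones(width)  # pretend dL/d(output) = 1 everywhere
sig_norms = [np.linalg.norm(grad)]
for W, z in zip(reversed(weights), reversed(pre_acts)):
    s = sigmoid(z)
    grad = W.T @ (grad * s * (1.0 - s))  # chain rule through sigmoid
    sig_norms.append(np.linalg.norm(grad))

# Backward pass with linear activations: the same weights now multiply
# the gradient by W.T at every layer, and its norm blows up instead.
grad = np.ones(width)
lin_norms = [np.linalg.norm(grad)]
for W in reversed(weights):
    grad = W.T @ grad
    lin_norms.append(np.linalg.norm(grad))

print(f"sigmoid chain: {sig_norms[0]:.2f} -> {sig_norms[-1]:.2e}")  # shrinks
print(f"linear chain:  {lin_norms[0]:.2f} -> {lin_norms[-1]:.2e}")  # grows
```

The first printed norm decays by many orders of magnitude (vanishing gradient), while the second grows by many (exploding gradient); this is why early layers train so much more slowly than later ones in deep sigmoid networks.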