- Reinforcement Learning with TensorFlow
- Sayon Dutta
Limitations of deep learning
Deep neural networks are black boxes of weights and biases trained over a large amount of data to find hidden patterns through inner representations; inspecting those representations directly would be impossible for humans, and even if it were possible, scalability would be an issue. Every neuron probably has a different weight, and thus a different gradient.
Training happens during backpropagation; thus, the direction of training is always from the later layers (output/right side) to the early layers (input/left side). As a result, the later layers learn very well compared to the early layers, and the deeper the network gets, the worse this imbalance becomes. This gives rise to two possible problems associated with deep learning (demonstrated in the sketch after this list):
- The vanishing gradient problem
- The exploding gradient problem
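The vanishing effect is easy to observe directly. The following is a minimal sketch, assuming TensorFlow 2.x and its eager `GradientTape` API (which may differ from the TensorFlow version used elsewhere in the book); the layer width, depth, and seed are arbitrary choices for illustration. It stacks ten sigmoid Dense layers, runs one backward pass, and prints the mean absolute gradient of each layer's kernel, which shrinks sharply toward the early layers.

```python
import tensorflow as tf

# Sketch: a deep stack of sigmoid layers makes early-layer gradients
# shrink, because each sigmoid derivative contributes a factor <= 0.25.
tf.random.set_seed(0)
num_layers = 10
layers = [tf.keras.layers.Dense(32, activation="sigmoid")
          for _ in range(num_layers)]
x = tf.random.normal((64, 32))  # a dummy batch of inputs

with tf.GradientTape() as tape:
    h = x
    for layer in layers:
        h = layer(h)
    # A dummy loss; any scalar objective shows the same effect.
    loss = tf.reduce_mean(tf.square(h))

# Gradient of the loss with respect to each layer's weight matrix.
grads = tape.gradient(loss, [layer.kernel for layer in layers])
for i, g in enumerate(grads):
    print(f"layer {i}: mean |grad| = {tf.reduce_mean(tf.abs(g)).numpy():.2e}")
```

Running this, the printed magnitudes decay as you move from the last layer back toward layer 0, which is the vanishing gradient problem in miniature. Initializing the kernels with large values instead (for example, `kernel_initializer=tf.keras.initializers.RandomNormal(stddev=3.0)`) tends to push the chain of factors above 1 and flips the behavior toward exploding gradients.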