Deep Learning Quick Reference
Mike Bernico
What happens if we use too many neurons?
If we make our network architecture too complicated, two things will happen:
- We're likely to develop a high variance model
- The model will train slower than a less complicated model
If we add many layers, our gradients will get smaller and smaller until the first few layers barely train. Backpropagation multiplies each layer's error signal by the derivative of every layer above it, so with saturating activations those products shrink toward zero; this is called the vanishing gradient problem. We're nowhere near that yet, but we will talk about it later.
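To make this concrete, here is a minimal sketch of the effect on a toy stack of fully connected sigmoid layers; the depth, width, and weight scale are illustrative assumptions, not values from the text. It just watches the gradient norm collapse as the gradient is backpropagated toward the input:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_layers, width = 20, 64          # illustrative depth and width
x = rng.standard_normal(width)

# Forward pass through a stack of sigmoid layers, caching activations.
weights = [rng.standard_normal((width, width)) * 0.1 for _ in range(n_layers)]
activations = [x]
for W in weights:
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass: each step multiplies by the sigmoid derivative
# a * (1 - a), which is at most 0.25, so the gradient keeps shrinking.
grad = np.ones(width)
for W, a in zip(reversed(weights), reversed(activations[1:])):
    grad = W.T @ (grad * a * (1.0 - a))
    print(f"gradient norm: {np.linalg.norm(grad):.3e}")
```

With these small weights the norm drops by orders of magnitude within a few layers, which is exactly why very deep networks need careful initialization and non-saturating activations.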
In (almost) the words of rap legend Christopher Wallace, aka Notorious B.I.G., the more neurons we come across, the more problems we see. With that said, the variance can be managed with dropout, regularization, and early stopping, and advances in GPU computing make deeper networks possible.
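As a rough illustration of those three tools working together, here is a minimal Keras sketch; the input shape, layer sizes, dropout rate, L2 strength, and patience are illustrative assumptions, not recommendations from the text:

```python
import tensorflow as tf

# Dropout and an L2 penalty attack the variance directly.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dropout(0.5),   # randomly zero half the activations
    tf.keras.layers.Dense(128, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Early stopping halts training once validation loss stops improving,
# and rolls back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                              patience=5,
                                              restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
```

Dropout and the L2 penalty rein in the high-variance model directly, while early stopping caps how long the bigger network gets to overfit.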
If I had to pick between a network with too many neurons or too few, and I could only run one experiment, I'd prefer to err on the side of slightly too many.