- Deep Learning Quick Reference
- Mike Bernico
Managing bias and variance in deep neural networks
Now that we've defined how we will structure data and refreshed ourselves on bias and variance, let's consider how we will control bias and variance errors in our deep neural networks.
- High bias: A network with high bias will have a very high error rate when predicting on the training set; the model is not fitting the data well. In order to reduce the bias, you will likely need to change the network architecture. You may need to add layers, neurons, or both. It may also be that your problem is better solved with a convolutional or recurrent network.
Of course, sometimes a problem shows high bias because of a lack of signal, or because it is simply very difficult, so be sure to calibrate your expectations against a reasonable error rate (I like to start by calibrating against human accuracy).
- High variance: A network with a low bias error is fitting the training data well; however, if the validation error is greater than the training error, the network has begun to overfit the training data. The two best ways to reduce variance are adding data and adding regularization to the network.
Adding data is straightforward but not always possible. Throughout the book, we will cover regularization techniques as they apply. The most common regularization techniques we will talk about are L2 regularization, dropout, and batch normalization.
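The diagnosis above can be sketched as a small helper. This is a minimal illustration, not code from the book; the function name `diagnose`, the `target_error` baseline (e.g. an estimate of human error), and the `gap_tolerance` threshold are all assumptions chosen for the example.

```python
def diagnose(train_error, val_error, target_error, gap_tolerance=0.02):
    """Rough bias/variance diagnosis from error rates (all in [0, 1]).

    target_error is the error rate we believe is achievable, e.g. an
    estimate of human error on the task. gap_tolerance is an arbitrary
    cutoff for how much gap we tolerate before flagging a problem.
    """
    if train_error - target_error > gap_tolerance:
        # Training error far from the achievable rate: underfitting.
        return "high bias: consider a bigger or different architecture"
    if val_error - train_error > gap_tolerance:
        # Validation error far above training error: overfitting.
        return "high variance: add data or regularization"
    return "ok: bias and variance both within tolerance"

# A model at human level on the training set but much worse on the
# validation set is overfitting:
print(diagnose(train_error=0.05, val_error=0.15, target_error=0.05))
```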
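To make the three regularization techniques concrete, here is a minimal NumPy sketch of their mechanics. This is an illustrative simplification, not the book's Keras code: in practice you would use the corresponding Keras layers and regularizers, and the hyperparameters shown (`lam`, `keep_prob`, `gamma`, `beta`) are arbitrary example values.

```python
import numpy as np

def l2_penalty(weights, lam=0.01):
    # L2 regularization adds lam * sum(w^2) to the loss,
    # penalizing large weights.
    return lam * sum(np.sum(w ** 2) for w in weights)

def inverted_dropout(activations, keep_prob=0.8, rng=np.random.default_rng()):
    # Inverted dropout: during training, zero each unit with
    # probability (1 - keep_prob) and rescale by 1 / keep_prob so the
    # expected activation is unchanged at test time.
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Batch normalization: standardize each feature over the
    # mini-batch, then apply a learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

In Keras these correspond to the `kernel_regularizer` argument of a layer, the `Dropout` layer, and the `BatchNormalization` layer, which the later chapters apply in context.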