- Deep Learning By Example
- Ahmed Menshawy
Generalization/true error
This is the second and more important type of error in data science. The whole purpose of building learning systems is to achieve a small generalization error on the test set; in other words, to get the model to work well on a set of observations/samples that haven't been used in the training phase. If you still consider the class scenario from the previous section, you can think of generalization error as the ability to solve exam problems that weren't necessarily similar to the problems you solved in the classroom to learn and get familiar with the subject. So, generalization performance is the model's ability to use the skills (parameters) that it learned in the training phase to correctly predict the outcome/output of unseen data.
In Figure 13, the light blue line represents the generalization error. You can see that as you increase the model complexity, the generalization error is reduced, until at some point the model starts to lose its generalization power and the generalization error begins to increase again. This part of the curve, where the model loses its generalization power, is called overfitting.
The takeaway message from this section is to minimize the generalization error as much as you can.
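The short sketch below is not from the book; it is a minimal illustration of the training-vs-generalization error behaviour described above, assuming numpy and scikit-learn are available. It fits polynomials of increasing degree (a stand-in for model complexity) to a noisy synthetic dataset and prints the training and test errors, so you can see the training error keep shrinking while the test (generalization) error eventually climbs back up.

```python
# A minimal sketch (not from the book) of the complexity/generalization trade-off.
# Assumes numpy and scikit-learn; all dataset and variable names are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)

# Synthetic 1-D regression problem: a noisy sine wave.
X = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.2, size=60)

# Simple train/test split: the test set plays the role of "unseen data".
X_train, X_test = X[:40], X[40:]
y_train, y_test = y[:40], y[40:]

for degree in [1, 3, 5, 9, 15]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    # Training error keeps dropping as degree grows, while the test
    # (generalization) error typically bottoms out and then rises: overfitting.
    print(f"degree={degree:2d}  train MSE={train_err:.4f}  test MSE={test_err:.4f}")
```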