- Deep Learning By Example
- Ahmed Menshawy
Apparent (training set) error
This is the first type of error, and it is not the one you should worry about minimizing: a small value for this error does not mean your model will work well on unseen data (that it will generalize). To better understand this type of error, consider a simple classroom scenario. The purpose of solving problems in the classroom is not to be able to solve the same problem again in the exam, but to be able to solve other problems that won't necessarily be similar to the ones you practiced in the classroom. The exam problems could come from the same family as the classroom problems, but they are not necessarily identical.
Apparent error measures how well the trained model performs on the training set, for which we already know the true outcome/output. If you manage to get 0 error over the training set, that is a strong hint that your model will (mostly) not work well on unseen data (it won't generalize). Data science, after all, is about using a training set as the base knowledge for the learning algorithm so that it performs well on future unseen data.
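The following is a minimal sketch (not from the book) that contrasts the apparent error with the error on held-out data; the synthetic dataset and the decision tree model are illustrative choices, assuming scikit-learn and NumPy are available:

```python
# Contrast apparent (training-set) error with error on unseen data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, illustrative data: 200 samples, 20 explanatory features.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# An unconstrained tree can memorize the training set.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

apparent_error = 1 - model.score(X_train, y_train)  # error on the data we trained on
unseen_error = 1 - model.score(X_test, y_test)      # error on held-out data

print(f"Apparent (training) error: {apparent_error:.3f}")  # typically 0.000
print(f"Error on unseen data:      {unseen_error:.3f}")    # noticeably larger
```

A near-zero apparent error alongside a much larger held-out error is exactly the gap between memorization and generalization that this section describes.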
In Figure 3, the red curve represents the apparent error. As you increase the model's ability to memorize (for example, by increasing model complexity through more explanatory features), the apparent error approaches zero. In fact, if you have as many features as observations/samples, the apparent error can be driven all the way to zero, as the sketch below illustrates.
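Here is a minimal NumPy sketch (an assumed example, not from the book) of that claim: with a square, full-rank design matrix of as many features as samples, a linear least-squares fit reproduces the training targets exactly, even when they are pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # number of observations
X = rng.normal(size=(n, n))              # n samples, n explanatory features
y = rng.normal(size=n)                   # arbitrary targets, even pure noise

# Least-squares fit; with a square, full-rank X the system is solved exactly.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

apparent_mse = np.mean((X @ coef - y) ** 2)      # training-set mean squared error
print(f"Apparent (training) MSE: {apparent_mse:.2e}")  # ~0, up to round-off
```

The fit has no error on the training set, yet it has learned nothing that would transfer to new samples, which is why apparent error alone is a poor guide to model quality.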