- Deep Learning By Example
- Ahmed Menshawy
Apparent (training set) error
This is the first type of error that you shouldn't focus on minimizing. Getting a small value for this type of error doesn't mean that your model will work well on unseen data (generalize). To better understand this type of error, consider a trivial classroom scenario: the purpose of solving problems in the classroom is not to be able to solve the same problems again in the exam, but to be able to solve other problems that won't necessarily be similar to the ones you practiced in the classroom. The exam problems could be from the same family as the classroom problems, but not necessarily identical.
Apparent error measures how well the trained model performs on the training set, for which we already know the true outcome/output. If you manage to get zero error over the training set, it is a strong indicator that your model (most likely) won't work well on unseen data (won't generalize). After all, data science is about using a training set as base knowledge so that the learning algorithm works well on future unseen data.
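The gap between apparent error and error on unseen data can be made concrete with a small sketch (my own illustration, not from the book): fitting a degree-7 polynomial through 8 noisy samples drives the training error to essentially zero, while the error on fresh points from the same underlying function stays noticeably larger.

```python
import numpy as np

rng = np.random.default_rng(42)

# An assumed underlying function; the model only sees noisy samples of it.
def f(x):
    return np.sin(x)

x_train = np.linspace(0, 3, 8)
y_train = f(x_train) + rng.normal(scale=0.1, size=x_train.size)

# Unseen data drawn from the same function.
x_test = np.linspace(0, 3, 100)
y_test = f(x_test)

# A degree-7 polynomial has 8 coefficients, so it can interpolate
# all 8 training points exactly -- zero apparent error.
coeffs = np.polyfit(x_train, y_train, deg=7)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(train_mse)  # effectively zero (floating-point noise)
print(test_mse)   # larger: the model memorized the noise, not f
```

The model "solved the classroom problems" perfectly, yet does worse on the "exam" points it never saw.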
In Figure 3, the red curve represents the apparent error. Whenever you increase the model's ability to memorize things (such as increasing the model complexity by increasing the number of explanatory features), you will find that this apparent error approaches zero. It can be shown that if you have as many features as observations/samples, then the apparent error will be zero:
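The last claim can be checked directly with plain least squares (a minimal sketch under the assumption of a linear model with a full-rank design matrix): when the number of features equals the number of samples, the fit interpolates the training targets exactly, so the apparent error is zero up to floating-point precision.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                        # number of observations/samples
X = rng.normal(size=(n, n))   # as many explanatory features as samples
y = rng.normal(size=n)        # arbitrary targets

# With a square, full-rank design matrix, the least-squares solution
# satisfies X @ w = y exactly -- the model memorizes the training set.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
apparent_error = np.mean((X @ w - y) ** 2)

print(apparent_error)  # effectively zero
```

Even though the targets here are pure noise, the apparent error vanishes, which is exactly why it is a misleading measure of model quality.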