- Machine Learning Quick Reference
- Rahul Kumar
Regularization
We now have a fair understanding of what overfitting means in machine learning. To reiterate: overfitting occurs when the model learns the noise that has crept into the data, that is, when it fits patterns that arise purely by random chance. As a result, the model's ability to generalize is jeopardized, it performs poorly on unseen data, and its accuracy drops sharply.
Can we combat this phenomenon? The answer is yes: regularization comes to the rescue. Let's figure out what it offers and how it works.
Regularization is a technique that keeps the model from becoming overly complex, thereby avoiding overfitting.
Let's take a look at the following linear regression equation:

Y = β₀ + β₁X₁ + β₂X₂ + … + βₚXₚ + ε

The loss function for this is the residual sum of squares:

RSS = Σᵢ (yᵢ − β₀ − Σⱼ βⱼxᵢⱼ)²
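As a minimal sketch of this loss, the following NumPy snippet computes the residual sum of squares for a set of coefficients (the data values here are made up purely for illustration):

```python
import numpy as np

# Toy data: 5 observations, 2 features (illustrative values only).
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 5.0]])
y = np.array([3.1, 3.9, 7.2, 6.8, 10.1])

def rss(beta0, beta, X, y):
    """Residual sum of squares: sum_i (y_i - beta0 - x_i . beta)^2."""
    residuals = y - (beta0 + X @ beta)
    return float(np.sum(residuals ** 2))
```

Minimizing this quantity over the intercept `beta0` and the coefficient vector `beta` is exactly what ordinary least squares does.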
Minimizing the loss function adjusts the coefficients and retrieves the optimal estimates. When there is noise in the training data, however, the estimated coefficients won't generalize well, and the model runs into overfitting. Regularization helps get rid of this by shrinking these estimates, or coefficients, toward 0.
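The shrinking effect can be seen directly with ridge (L2) regularization, which has a closed-form solution. The sketch below (synthetic data, assumed for illustration; the penalty strength `lam` is an arbitrary choice) compares plain least-squares estimates with ridge estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy regression problem (assumed data, for illustration).
n, p = 50, 5
true_beta = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
X = rng.normal(size=(n, p))
y = X @ true_beta + rng.normal(scale=2.0, size=n)

def ols(X, y):
    # Ordinary least squares: minimizes the plain RSS.
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, lam):
    # Ridge (L2) regularization: minimizes RSS + lam * ||beta||^2.
    # The added lam * I term shrinks the estimates toward 0 as lam grows.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = ols(X, y)
beta_ridge = ridge(X, y, lam=10.0)

# The ridge estimates always have a smaller norm than the OLS estimates,
# which is the shrinkage effect described above.
```

Increasing `lam` pulls the coefficients further toward 0; setting `lam=0` recovers ordinary least squares exactly.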
For now, we will cover two types of regularization; the other types will be covered in later chapters.