- Statistics for Machine Learning
- Pratap Dangeti
Cross-validation
Cross-validation is another way of ensuring robustness in the model, at the expense of extra computation. In the ordinary modeling methodology, a model is developed on train data and evaluated on test data. In some extreme cases, the train and test sets might not have been homogeneously selected, and unseen extreme cases may appear in the test data, which will drag down the performance of the model.
In cross-validation, on the other hand, the data is divided into equal parts; training is performed on all parts except one, and performance is evaluated on the held-out part. This process is repeated as many times as the number of parts the user has chosen.
Example: In five-fold cross-validation, the data is divided into five parts; the model is trained on four parts and tested on the remaining part. This process runs five times, so that every point in the data is used for testing exactly once. Finally, the reported error is the average of the errors across all five folds:
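The five-fold procedure above can be sketched as follows. This is a minimal illustration using only NumPy, with a simple least-squares linear fit standing in for whatever model is being validated; the function names (`k_fold_cv_error`, `fit_linear`, `predict_linear`) are illustrative, not from the book:

```python
import numpy as np

def k_fold_cv_error(X, y, fit, predict, k=5, seed=0):
    """Average held-out error: train on k-1 folds, test on the remaining one."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))       # shuffle so folds are homogeneous
    folds = np.array_split(indices, k)      # k roughly equal parts
    errors = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        preds = predict(model, X[test_idx])
        errors.append(np.mean((preds - y[test_idx]) ** 2))  # MSE on held-out fold
    return float(np.mean(errors))           # final error = average over folds

# Toy model: ordinary least squares with an intercept term
def fit_linear(X, y):
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_linear(coef, X):
    return np.column_stack([X, np.ones(len(X))]) @ coef

# Synthetic data: y = 3x + 2 plus unit-variance noise
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 1, size=100)

cv_error = k_fold_cv_error(X, y, fit_linear, predict_linear, k=5)
```

Because the noise has unit variance, the five-fold MSE should come out close to 1; in practice one would reach for `sklearn.model_selection.KFold` or `cross_val_score` rather than hand-rolling the loop.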
