- Statistics for Machine Learning
- Pratap Dangeti
Cross-validation
Cross-validation is another way of ensuring robustness in the model, at the expense of computation. In the ordinary modeling methodology, a model is developed on train data and evaluated on test data. In some extreme cases, the train and test sets might not have been homogeneously selected, and unseen extreme cases might appear in the test data, which will drag down the performance of the model.
In the cross-validation methodology, on the other hand, the data is divided into equal parts; the model is trained on all parts except one, and performance is evaluated on the held-out part. This process is repeated as many times as the number of parts the user has chosen, so that each part is held out exactly once.
Example: In five-fold cross-validation, the data will be divided into five parts, then trained on four parts of the data and tested on the remaining part. This process will run five times, in order to cover all points in the data. Finally, the reported error is the average of the errors from all five folds.
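The five-fold procedure above can be sketched as follows. This is a minimal illustration using scikit-learn's `KFold` with a linear regression model on synthetic data (the data, model, and error metric are assumptions for the sketch, not prescribed by the text):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic regression data (assumed purely for illustration)
rng = np.random.RandomState(42)
X = rng.rand(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Split the data into five equal parts
kf = KFold(n_splits=5, shuffle=True, random_state=42)

fold_errors = []
for train_idx, test_idx in kf.split(X):
    # Train on four parts, evaluate on the held-out part
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    fold_errors.append(mean_squared_error(y[test_idx], preds))

# The cross-validation error is the average of the five fold errors
cv_error = np.mean(fold_errors)
print("Per-fold MSE:", fold_errors)
print("Five-fold CV error:", cv_error)
```

Each of the five folds serves as the test set exactly once, so every observation contributes to both training and evaluation across the five runs.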
