- Mastering Predictive Analytics with scikit-learn and TensorFlow
- Alan Fontaine
Ensemble methods and their working
Ensemble methods are based on a very simple idea: instead of using a single model to make a prediction, we use many models and then aggregate their predictions with some method. Having different models is like having different points of view, and it has been demonstrated that by aggregating models that offer different points of view, predictions can be more accurate. These methods further improve generalization over a single model because they reduce the risk of selecting a poorly performing classifier:
[Figure: three classifiers with different decision boundaries separating triangles, circles, and squares in a two-feature space]
In the preceding diagram, we can see that each object belongs to one of three classes: triangles, circles, and squares. In this simplified example, we have two features to separate, or classify, the objects into the different classes. As you can see, we can use three different classifiers, and all three represent different approaches and produce different kinds of decision boundaries.
Ensemble learning combines all those individual predictions into a single one. The predictions made from combining the three boundaries usually have better properties than the ones produced by the individual models. This is the simple idea behind ensemble methods, also called ensemble learning.
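The aggregation step above can be sketched as a simple majority vote. This is a minimal illustration, not the book's code: it assumes a toy two-feature, three-class dataset (mirroring the diagram) and three arbitrary scikit-learn classifiers with different kinds of decision boundaries.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for the diagram: two features, three classes
X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)

# Three models with different kinds of decision boundaries
models = [LogisticRegression(max_iter=1000),
          KNeighborsClassifier(),
          DecisionTreeClassifier(random_state=0)]

# Each row holds one model's predictions for all samples
preds = np.array([m.fit(X, y).predict(X) for m in models])

# Aggregate by majority vote: the most frequent class across models wins
combined = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
print(combined[:10])
```

Each column of `preds` is one object's three "points of view"; the vote collapses them into the single combined prediction the text describes.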
The most commonly used ensemble methods are as follows:
- Bootstrap sampling
- Bagging
- Random forests
- Boosting
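Before the high-level explanations, it may help to see that most of these methods already exist as ready-made estimators in scikit-learn's `ensemble` module. The dataset and hyperparameters below are illustrative choices, not taken from the book:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging, random forests, and (gradient) boosting share the same fit/score API
scores = {}
for clf in (BaggingClassifier(random_state=0),
            RandomForestClassifier(random_state=0),
            GradientBoostingClassifier(random_state=0)):
    clf.fit(X_train, y_train)
    scores[type(clf).__name__] = clf.score(X_test, y_test)
print(scores)
```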
Before giving a high-level explanation of these methods, we need to discuss a very important statistical technique known as bootstrap sampling.
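As a preview of that technique: a bootstrap sample draws n points from an n-point dataset *with replacement*, so some points repeat and others are left out. A minimal sketch with NumPy (the tiny `data` array is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)

# Draw a bootstrap sample: same size as the data, sampled with replacement
sample = rng.choice(data, size=data.size, replace=True)

# Points never drawn are the "out-of-bag" points for this sample
oob = np.setdiff1d(data, sample)
print(sample, oob)
```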