- Ensemble Machine Learning Cookbook
- Dipayan Sarkar, Vijayalakshmi Natarajan
How it works...
VotingClassifier implements two types of voting: hard and soft voting. In hard voting, the final class label is the label predicted most frequently by the individual classification models. In other words, the predictions from all classifiers are aggregated, and the class that receives the most votes wins. In simple terms, hard voting takes the mode of the predicted class labels.
In hard voting on the class labels, the prediction $\hat{y}_i$ for each observation $i$ (where $i = 1, \ldots, n$ observations) is the majority vote of the individual classifiers $C_1, C_2, \ldots, C_m$:

$$\hat{y}_i = \operatorname{mode}\{C_1(x_i), C_2(x_i), \ldots, C_m(x_i)\}$$
As shown in the previous section, we have three models: one from the decision tree, one from the SVMs, and one from logistic regression. Let's say that the models classify a training observation as class 1, class 0, and class 1 respectively. Then, with majority voting, we have the following:

$$\hat{y} = \operatorname{mode}\{1, 0, 1\} = 1$$
In this case, we would classify the observation as class 1.
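A minimal sketch of that majority vote in Python, using the standard library's statistics.mode; the list of predictions below simply encodes the example above, not output from the book's code:

```python
from statistics import mode

# Class labels predicted for one observation by the decision tree,
# the SVM, and the logistic regression model, respectively
predictions = [1, 0, 1]

# Hard voting picks the most frequent label
print(mode(predictions))  # prints 1
```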
In the preceding section, in Step 1, we imported the required libraries to build our models. In Step 2, we created our feature set. We also split our data to create the training and testing samples. In Step 3, we trained three models with the decision tree, SVMs, and logistic regression respectively. In Step 4, we looked at the accuracy score of each of the base learners, while in Step 5, we ensembled the models using VotingClassifier() and looked at the accuracy score of the ensemble model.
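For reference, the following is a condensed, hypothetical sketch of that workflow using scikit-learn; the synthetic dataset from make_classification stands in for the book's actual feature set, and the parameter values are illustrative assumptions rather than the recipe's exact code:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score

# Step 2: create a feature set and split it into training and testing samples
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=1)

# Step 3: train three models with a decision tree, SVMs, and logistic regression
dt_model = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
svm_model = SVC(random_state=1).fit(X_train, y_train)
lr_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Step 4: accuracy score of each base learner
for name, model in [('Decision Tree', dt_model), ('SVM', svm_model),
                    ('Logistic Regression', lr_model)]:
    print(name, accuracy_score(y_test, model.predict(X_test)))

# Step 5: ensemble the base learners with VotingClassifier() using hard voting
voting_model = VotingClassifier(
    estimators=[('dt', dt_model), ('svm', svm_model), ('lr', lr_model)],
    voting='hard')
voting_model.fit(X_train, y_train)
print('Ensemble', accuracy_score(y_test, voting_model.predict(X_test)))
```

VotingClassifier refits clones of the supplied estimators on the training data, so the ensemble's accuracy score can be compared directly with those of the individual base learners printed above.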