
Gradient boosting

Gradient boosted trees are an ensemble of shallow trees (or weak learners). A shallow decision tree can be as small as a tree with just two leaves (also known as a decision stump). Boosting methods mainly reduce bias, but they also reduce variance slightly.
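As a minimal sketch of what a decision stump looks like in code, the following uses scikit-learn (an assumption; the text does not name a library). Setting max_depth=1 restricts the tree to a single split, which yields exactly two leaves:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.0, 1.2, 2.9, 3.1])

# max_depth=1 allows only one split, so the tree has just two leaves:
# this is a decision stump.
stump = DecisionTreeRegressor(max_depth=1)
stump.fit(X, y)
print(stump.predict(X))  # only two distinct predicted values
```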

The original papers by Breiman and Friedman, who developed the idea of gradient boosting, are available at the following links:

Intuitively, in the gradient boosting model, the decision trees in the ensemble are trained over several iterations, as shown in the following image. A new decision tree is added at each iteration, and every additional tree is trained to correct the errors made by the ensemble trained in the previous iterations. This differs from the random forest model, where each decision tree is trained independently of the others in the ensemble.
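The iterative scheme can be made concrete with a short sketch. The following is a simplified, from-scratch gradient boosting loop for regression with squared loss, assuming scikit-learn trees as the weak learners; the function names and the learning-rate value are illustrative, not from the text. Each new tree is fit to the residuals (the negative gradient of the squared loss) of the ensemble built so far:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_trees=100, learning_rate=0.1):
    """Fit shallow trees sequentially; each tree learns the residuals
    of the ensemble trained in the previous iterations."""
    f0 = y.mean()                                  # initial constant prediction
    pred = np.full_like(y, f0, dtype=float)
    trees = []
    for _ in range(n_trees):
        residuals = y - pred                       # what the ensemble still gets wrong
        tree = DecisionTreeRegressor(max_depth=2)  # shallow weak learner
        tree.fit(X, residuals)
        pred += learning_rate * tree.predict(X)    # shrunken additive update
        trees.append(tree)
    return f0, trees

def predict(f0, trees, X, learning_rate=0.1):
    """Sum the initial prediction and every tree's shrunken contribution.
    Use the same learning_rate that was used during fitting."""
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```

Contrast this with a random forest, where the loop body would fit each tree on a bootstrap sample of (X, y) directly, with no dependence on the predictions of previously trained trees.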

The gradient boosting model uses fewer trees than the random forest model, but it comes with a large number of hyperparameters that must be tuned to obtain a decent model.
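To give a sense of that tuning surface, here is a sketch using scikit-learn's GradientBoostingRegressor (again an assumption about the library; the values shown are illustrative starting points, not recommendations from the text):

```python
from sklearn.ensemble import GradientBoostingRegressor

model = GradientBoostingRegressor(
    n_estimators=200,    # number of boosting iterations (trees)
    learning_rate=0.05,  # shrinks each tree's contribution
    max_depth=3,         # depth of each weak learner
    subsample=0.8,       # fraction of rows sampled per iteration
    min_samples_leaf=5,  # regularizes individual trees
)
```

These hyperparameters interact: for example, a smaller learning_rate typically needs a larger n_estimators, which is why tuning them jointly matters.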

An interesting explanation of gradient boosting can be found at the following link: http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/.