
Gradient boosting

Gradient boosted trees are an ensemble of shallow trees, also called weak learners. A shallow decision tree can be as small as a tree with just two leaves (known as a decision stump). Boosting methods mainly help reduce bias, but they also reduce variance slightly.
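As a concrete illustration of such a weak learner, the following minimal sketch builds a decision stump. It assumes scikit-learn is available and uses a synthetic dataset; on its own, a depth-1 tree is deliberately underpowered:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic data for illustration only
X, y = make_classification(n_samples=100, random_state=0)

# A decision stump: a tree restricted to a single split (two leaves)
stump = DecisionTreeClassifier(max_depth=1).fit(X, y)
print("Stump accuracy:", stump.score(X, y))  # weak on its own
```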

The original papers by Breiman and Friedman, who developed the idea of gradient boosting, are available at the following links:

Intuitively, in the gradient boosting model, the decision trees in the ensemble are trained over several iterations, as shown in the following image. A new decision tree is added at each iteration, and each additional tree is trained to correct the errors of the ensemble built in the previous iterations. This is different from the random forest model, where each decision tree is trained independently of the others in the ensemble.
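The following is a minimal sketch of this iterative idea for regression with squared-error loss, where the negative gradient is simply the residual. It assumes scikit-learn and NumPy; the toy data, learning rate, and iteration count are arbitrary choices for illustration, not part of any particular library's algorithm:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy regression data (assumed, for illustration only)
rng = np.random.RandomState(42)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

learning_rate = 0.1   # shrinkage applied to each new tree
n_iterations = 100    # number of boosting rounds

# Start from a constant prediction (the mean of y)
prediction = np.full_like(y, y.mean())
trees = []

for _ in range(n_iterations):
    # For squared-error loss, the negative gradient is the residual
    residuals = y - prediction
    # Fit a shallow tree (here a stump, max_depth=1) to the residuals
    tree = DecisionTreeRegressor(max_depth=1)
    tree.fit(X, residuals)
    # Each new tree nudges the ensemble toward the targets
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print("Training MSE:", np.mean((y - prediction) ** 2))
```

Note how each tree depends on the predictions of all the trees before it; in a random forest, by contrast, every tree could be fit in parallel on the raw targets.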

The gradient boosting model typically needs fewer trees than the random forest model, but it comes with a large number of hyperparameters that must be tuned to obtain a good gradient boosting model.
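As a rough sketch of that tuning surface, the snippet below grid-searches a few of the interacting hyperparameters. scikit-learn's GradientBoostingClassifier is used here only as one concrete implementation, and the candidate values are arbitrary starting points, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

# A few of the many hyperparameters that interact in gradient boosting
param_grid = {
    "n_estimators": [50, 100, 200],      # number of boosting iterations
    "learning_rate": [0.05, 0.1, 0.2],   # shrinkage per tree
    "max_depth": [1, 2, 3],              # depth of each weak learner
    "subsample": [0.8, 1.0],             # stochastic gradient boosting
}

search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```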

An interesting explanation of gradient boosting can be found at the following link: http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/.