
Gradient boosting

Gradient boosted trees are an ensemble of shallow trees (or weak learners). The shallow decision trees can be as small as a tree with just two leaves (also known as a decision stump). Boosting mainly helps reduce bias, but it also reduces variance slightly.
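As a minimal sketch of a stump-based booster (assuming scikit-learn, which this section does not itself reference), the following restricts every tree in the ensemble to a single split:

    # A minimal sketch, assuming scikit-learn: a gradient boosting
    # classifier whose weak learners are decision stumps, i.e. trees
    # restricted to depth 1 (two leaves).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=500, random_state=0)

    # max_depth=1 limits every tree in the ensemble to one split,
    # so each weak learner is a decision stump.
    stump_booster = GradientBoostingClassifier(max_depth=1,
                                               n_estimators=100,
                                               learning_rate=0.1,
                                               random_state=0)
    stump_booster.fit(X, y)
    print(stump_booster.score(X, y))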

The original papers by Breiman and Friedman, who developed the idea of gradient boosting, are available at the following links:

Intuitively, in the gradient boosting model, the decision trees in the ensemble are trained over several iterations, as shown in the following image. A new decision tree is added at each iteration, and each additional tree is trained to improve the ensemble trained in the previous iterations. This differs from the random forest model, where each decision tree is trained independently of the other trees in the ensemble.
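The following toy sketch illustrates this iterative idea for squared-error regression; it is an illustration of the boosting loop, not any particular library's implementation. Each new shallow tree is fit to the residuals (the negative gradient of the squared-error loss) of the ensemble built so far:

    # A toy sketch of the boosting loop for squared-error regression:
    # each new shallow tree is trained on the residuals of the
    # predictions accumulated over the previous iterations.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

    learning_rate = 0.1
    prediction = np.full_like(y, y.mean())    # initial constant model
    trees = []

    for _ in range(50):
        residuals = y - prediction            # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=2)
        tree.fit(X, residuals)                # new tree corrects current errors
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)

    print(np.mean((y - prediction) ** 2))     # training MSE shrinks as trees are added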

The gradient boosting model uses fewer trees than the random forest model, but it comes with a large number of hyperparameters that need to be tuned to get a decent gradient boosting model.
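As a hedged example of what such tuning might look like (the parameter names follow scikit-learn's GradientBoostingClassifier, and the candidate values are purely illustrative, not recommendations), a grid search over a few common hyperparameters could be set up as follows:

    # A sketch of tuning a few common gradient boosting hyperparameters
    # with a cross-validated grid search; values are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=500, random_state=0)

    param_grid = {
        "n_estimators": [50, 100, 200],   # number of trees in the ensemble
        "learning_rate": [0.05, 0.1],     # shrinkage applied to each tree
        "max_depth": [1, 2, 3],           # depth of the weak learners
        "subsample": [0.8, 1.0],          # fraction of rows used per tree
    }

    search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                          param_grid, cv=3)
    search.fit(X, y)
    print(search.best_params_)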

An interesting explanation of gradient boosting can be found at the following link:  http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/.