
Bias versus variance trade-off

Every model's error can be decomposed into a bias component and a variance component, in addition to irreducible noise. Bias and variance are inversely related: while trying to reduce one component, the other will increase. The true art lies in creating a good fit by balancing both; the ideal model has both low bias and low variance.

Errors from the bias component stem from erroneous assumptions in the underlying learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs; this phenomenon causes underfitting.

On the other hand, errors from the variance component stem from the model's sensitivity to small fluctuations in the training data: even a minor change in the training set can alter the fit substantially. High variance causes overfitting.
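This decomposition can be made concrete with a small simulation. Everything below (the sine target, noise level, polynomial degrees, and test point) is an illustrative assumption, not from the text: we repeatedly resample a noisy training set, fit a straight line (high bias) and a high-degree polynomial (high variance), and measure how each model's prediction at one fixed point behaves across resamples.

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_and_variance(degree, x0=1.5, n_repeats=200, n_samples=30):
    """Estimate bias and variance of a degree-`degree` polynomial fit at x0."""
    preds = []
    for _ in range(n_repeats):
        # Fresh noisy training set drawn from y = sin(x) + noise.
        x = rng.uniform(0.0, np.pi, n_samples)
        y = np.sin(x) + rng.normal(0.0, 0.2, n_samples)
        coeffs = np.polyfit(x, y, degree)       # least-squares polynomial fit
        preds.append(np.polyval(coeffs, x0))
    preds = np.array(preds)
    bias = preds.mean() - np.sin(x0)   # systematic error at x0
    variance = preds.var()             # spread of the fits across resamples
    return bias, variance

bias_line, var_line = bias_and_variance(degree=1)      # rigid straight line
bias_wiggly, var_wiggly = bias_and_variance(degree=7)  # flexible polynomial
```

Across resamples, the straight line is consistently wrong in the same direction (large bias, small variance), while the flexible polynomial is right on average but its predictions scatter widely (small bias, large variance).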


An example of a high-bias model is logistic or linear regression, in which the fitted model is merely a straight line; it carries a large error component whenever a linear model cannot approximate the underlying data well.
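A minimal sketch of that underfitting signature, with hypothetical data (the noisy parabola below is an assumption for illustration): even on its own training data, a straight-line fit explains almost nothing of a clearly nonlinear target.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical nonlinear data: a noisy parabola on a symmetric interval.
rng = np.random.default_rng(42)
X = rng.uniform(-3.0, 3.0, (200, 1))
y = X[:, 0] ** 2 + rng.normal(0.0, 0.3, 200)

model = LinearRegression().fit(X, y)
# Training R^2 stays low because no straight line can track the parabola;
# a poor score on the training data itself is the mark of high bias.
train_r2 = model.score(X, y)
```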

An example of a high-variance model is a decision tree, in which the model can grow an overly wiggly fit, so that even a small change in the training data causes a drastic change in the fitted curve.
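The overfitting behavior can be sketched the same way (again with hypothetical data): an unconstrained decision tree grows one leaf per training point, memorizing the noise along with the signal, so its near-perfect training score does not carry over to held-out data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical data: a noisy parabola, plus an independent test set.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, (300, 1))
y = X[:, 0] ** 2 + rng.normal(0.0, 1.0, 300)
X_test = rng.uniform(-3.0, 3.0, (300, 1))
y_test = X_test[:, 0] ** 2 + rng.normal(0.0, 1.0, 300)

# With no depth limit, the tree splits until every leaf is pure,
# fitting the training noise exactly.
tree = DecisionTreeRegressor(random_state=0).fit(X, y)
train_r2 = tree.score(X, y)        # essentially perfect
test_r2 = tree.score(X_test, y_test)  # noticeably worse
```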

At the moment, state-of-the-art methods start from high-variance models such as decision trees and build ensembles on top of them to reduce the error caused by high variance, while at the same time avoiding any material increase in error from the bias component. The best example of this category is random forest, in which many decision trees are grown independently and their predictions are combined to come up with the best fit; we will cover this in upcoming chapters.
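That variance reduction can be sketched as follows (the quadratic data, sample sizes, and `n_estimators=200` are illustrative assumptions, not values from the text): on the same noisy sample, a forest of independently grown trees generalizes better than a single unpruned tree, because averaging cancels much of the variance each individual tree contributes.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

# Hypothetical data: a noisy parabola with an independent test set.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, (300, 1))
y = X[:, 0] ** 2 + rng.normal(0.0, 1.0, 300)
X_test = rng.uniform(-3.0, 3.0, (300, 1))
y_test = X_test[:, 0] ** 2 + rng.normal(0.0, 1.0, 300)

tree = DecisionTreeRegressor(random_state=0).fit(X, y)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Each deep tree overfits differently; averaging their predictions
# smooths out the individual wiggles, so the ensemble's test score
# beats the single tree's.
tree_r2 = tree.score(X_test, y_test)
forest_r2 = forest.score(X_test, y_test)
```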
