
Bias versus variance trade-off

Every model has both bias and variance error components in addition to irreducible white noise. Bias and variance are inversely related: reducing one component tends to increase the other. The true art lies in creating a good fit by balancing both. The ideal model will have both low bias and low variance.

Errors from the bias component come from erroneous assumptions in the underlying learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs; this phenomenon causes an underfitting problem.

On the other hand, errors from the variance component come from the model's sensitivity to small fluctuations in the training data; high variance can cause an overfitting problem, in which the model fits the noise rather than the underlying signal.


An example of a high-bias model is logistic or linear regression, in which the fit is merely a straight line and may carry a high error component because a linear model cannot approximate the underlying data well.

An example of a high-variance model is a decision tree, in which the model may fit an overly wiggly curve, so that even a small change in the training data causes a drastic change in the fitted curve.
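The contrast between the two failure modes can be sketched with a small NumPy example (the data, noise level, and polynomial degrees below are illustrative assumptions, not taken from any particular dataset). A degree-1 polynomial plays the role of the high-bias straight line, while a high-degree polynomial plays the role of the high-variance wiggly curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target; the model must separate signal from noise.
x = np.linspace(0, 3, 20)
y_train = np.sin(x) + rng.normal(0, 1.0, size=x.size)

# Held-out points between the training inputs, with fresh noise.
x_test = (x[:-1] + x[1:]) / 2
y_test = np.sin(x_test) + rng.normal(0, 1.0, size=x_test.size)

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# High-bias model: a straight line (degree-1 polynomial) underfits.
line = np.polyfit(x, y_train, deg=1)

# High-variance model: a degree-12 polynomial chases the training noise.
wiggly = np.polyfit(x, y_train, deg=12)

train_line = mse(np.polyval(line, x), y_train)
test_line = mse(np.polyval(line, x_test), y_test)
train_wiggly = mse(np.polyval(wiggly, x), y_train)
test_wiggly = mse(np.polyval(wiggly, x_test), y_test)
# The wiggly fit always achieves the lower training error (its hypothesis
# space contains the straight line), but its held-out error is typically
# far larger than its training error -- the signature of overfitting.
```

Note that the straight line's training and test errors stay close to each other (both dominated by bias), whereas the gap between `train_wiggly` and `test_wiggly` is what high variance looks like in practice.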

At the moment, state-of-the-art methods utilize high-variance models such as decision trees and perform ensembling on top of them to reduce the errors caused by high variance, while not compromising with increased errors from the bias component. The best example of this category is random forest, in which many decision trees are grown independently and ensembled in order to come up with the best fit; we will cover this in upcoming chapters.
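The variance-reduction idea behind such ensembles can be sketched with bagging (bootstrap aggregating), a minimal NumPy illustration rather than a full random forest. As a stand-in for a fully grown tree on a single feature, we use a 1-nearest-neighbour regressor, which behaves like a tree with one leaf per training point and is therefore a classic high-variance learner (all data and settings below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy 1-D regression data.
n = 200
x = np.sort(rng.uniform(0, 3, n))
y = np.sin(x) + rng.normal(0, 0.5, n)
y_test = np.sin(x) + rng.normal(0, 0.5, n)  # fresh noise at the same inputs

def one_nn_predict(train_x, train_y, query_x):
    # Predict each query point with the value of its nearest neighbour;
    # on its own training set this memorises the data exactly.
    idx = np.abs(train_x[None, :] - query_x[:, None]).argmin(axis=1)
    return train_y[idx]

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# A single high-variance model memorises the training noise perfectly.
single = one_nn_predict(x, y, x)

# Bagging: fit the same learner on bootstrap resamples and average the
# predictions, which cancels out much of the noise-driven variance.
n_models = 50
ensemble = np.zeros(n)
for _ in range(n_models):
    boot = rng.integers(0, n, n)  # sample rows with replacement
    ensemble += one_nn_predict(x[boot], y[boot], x)
ensemble /= n_models
```

Because each bootstrap member sees a slightly different resample, their individual wiggly fits disagree on the noise but agree on the signal, so the averaged prediction has lower variance than any single member; a random forest applies the same principle with decorrelated decision trees.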
