
MAP learning

When selecting the right hypothesis, a Bayesian approach is normally one of the best choices, because it takes into account all the factors and, as we'll see, even if it's based on conditional independence, such an approach works well even when some factors are partially dependent. However, its complexity (in terms of probabilities) can easily grow, because all terms must always be taken into account. For example, a real coin is a very short cylinder, so, in tossing a coin, we should also consider the probability of it landing on its edge. Let's say it's 0.001. This means that we have three possible outcomes: P(head) = P(tail) = (1.0 - 0.001) / 2.0 and P(edge) = 0.001. The latter event is obviously unlikely, but in Bayesian learning it must be considered (even if it'll be squeezed by the strength of the other terms).
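
To make this concrete, here is a minimal sketch in Python (the likelihood values are hypothetical, chosen only for illustration) showing that a full Bayesian update keeps all three terms: the edge hypothesis is squeezed by the normalization, but never discarded.

```python
import numpy as np

# Prior probabilities of the three outcomes (from the text)
p_edge = 0.001
priors = np.array([(1.0 - p_edge) / 2.0,   # P(head)
                   (1.0 - p_edge) / 2.0,   # P(tail)
                   p_edge])                # P(edge)

# Hypothetical likelihoods P(D|h) of some observed evidence D
likelihoods = np.array([0.6, 0.3, 0.1])

# Full Bayesian posterior: likelihood x prior, then normalize.
# Every term enters the computation, however small.
posterior = likelihoods * priors
posterior /= posterior.sum()

print(posterior)  # approx. [0.6665, 0.3333, 0.0002] — all three terms survive
```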

An alternative is picking the most probable hypothesis in terms of the a posteriori probability:

$$h_{MAP} = \underset{h \in H}{\operatorname{argmax}} \; P(h|D) = \underset{h \in H}{\operatorname{argmax}} \; P(D|h)P(h)$$

The evidence P(D) is constant with respect to h, so it can be dropped from the maximization.

This approach is called MAP (maximum a posteriori) and it can really simplify the scenario when some hypotheses are quite unlikely (for example, in tossing a coin, a MAP hypothesis will discard P(edge)). However, it still has an important drawback: it depends on the prior probabilities (remember that maximizing the a posteriori probability also implies considering the a priori ones). As Russell and Norvig (Russell S., Norvig P., Artificial Intelligence: A Modern Approach, Pearson) pointed out, this is often a delicate part of an inferential process, because there's always a theoretical background that can drive the choice toward a particular prior and exclude others. In order to rely only on the data, it's necessary to adopt a different approach.
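
As a contrast with the full Bayesian sketch above, here is a sketch of MAP selection over the same hypothetical terms. Since P(D) is constant with respect to h, maximizing P(h|D) is the same as maximizing P(D|h)P(h); the unlikely edge hypothesis can never win the argmax, so it's effectively discarded. The second part illustrates the drawback just discussed: a different (again hypothetical) prior can flip the choice with identical data.

```python
import numpy as np

# Same hypothetical numbers as in the previous sketch
p_edge = 0.001
priors = np.array([(1.0 - p_edge) / 2.0,   # P(head)
                   (1.0 - p_edge) / 2.0,   # P(tail)
                   p_edge])                # P(edge)
likelihoods = np.array([0.6, 0.3, 0.1])    # hypothetical P(D|h)

# P(D) doesn't depend on h, so argmax P(h|D) = argmax P(D|h)P(h)
hypotheses = ['head', 'tail', 'edge']
h_map = hypotheses[np.argmax(likelihoods * priors)]
print(h_map)  # 'head' — the edge hypothesis is simply discarded

# The drawback: the choice depends on the prior. An alternative prior
# (equally hypothetical) flips the result with exactly the same data.
alt_priors = np.array([0.2, 0.7, 0.1])
print(hypotheses[np.argmax(likelihoods * alt_priors)])  # 'tail'
```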
