
MAP learning

When selecting the right hypothesis, a Bayesian approach is normally one of the best choices, because it takes all the factors into account and, as we'll see, even though it's based on conditional independence, it also works well when some factors are partially dependent. However, its complexity (in terms of probabilities) can easily grow, because all terms must always be taken into account. For example, a real coin is a very short cylinder, so, when tossing it, we should also consider the probability of it landing on its edge. Let's say it's 0.001. This means that we have three possible outcomes: P(head) = P(tail) = (1.0 - 0.001) / 2.0 and P(edge) = 0.001. The latter event is obviously unlikely, but in Bayesian learning it must be considered (even if it'll be overwhelmed by the strength of the other terms).
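As a minimal sketch of this idea (the likelihood values below are made up purely for illustration), a full Bayesian update keeps every term, including the tiny P(edge):

```python
import numpy as np

# Hypothetical outcome probabilities for a real (cylindrical) coin
p_edge = 0.001
p_head = p_tail = (1.0 - p_edge) / 2.0

outcomes = np.array(['head', 'tail', 'edge'])
priors = np.array([p_head, p_tail, p_edge])

# Illustrative likelihoods P(D|h) for some observed evidence D
# (made-up values; in practice they come from a model of the data)
likelihoods = np.array([0.6, 0.39, 0.01])

# Full Bayesian update: every term is kept, even the tiny P(edge),
# although the stronger terms dominate the normalized result
posteriors = likelihoods * priors
posteriors /= posteriors.sum()

for o, p in zip(outcomes, posteriors):
    print(f'P({o}|D) = {p:.6f}')
```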

An alternative is to pick the most probable hypothesis in terms of the a posteriori probability:
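$$h_{MAP} = \operatorname*{argmax}_{h \in H} P(h|D) = \operatorname*{argmax}_{h \in H} \frac{P(D|h)\,P(h)}{P(D)} = \operatorname*{argmax}_{h \in H} P(D|h)\,P(h)$$

The last equality holds because the evidence P(D) is constant with respect to h, so it can be dropped from the maximization.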

This approach is called MAP (maximum a posteriori) and it can really simplify the scenario when some hypotheses are quite unlikely (for example, when tossing a coin, a MAP hypothesis will discard P(edge)). However, it still has an important drawback: it depends on the a priori probabilities (remember that maximizing the a posteriori implies also considering the a priori). As Russell and Norvig (Russell S., Norvig P., Artificial Intelligence: A Modern Approach, Pearson) pointed out, this is often a delicate part of an inferential process, because there's always a theoretical background that can lead to a particular choice and exclude others. In order to rely only on the data, it's necessary to adopt a different approach.
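A minimal sketch of both points (again with made-up likelihood values): MAP keeps only the argmax of P(D|h)P(h), which effectively discards the edge hypothesis, but changing the priors alone, with the data unchanged, can flip the selected hypothesis:

```python
import numpy as np

# Same hypothetical setup as before
hypotheses = np.array(['head', 'tail', 'edge'])
priors = np.array([0.4995, 0.4995, 0.001])
likelihoods = np.array([0.6, 0.39, 0.01])

# MAP: keep only the hypothesis maximizing P(D|h) * P(h);
# P(D) is constant and can be dropped from the argmax
h_map = hypotheses[np.argmax(likelihoods * priors)]
print(f'MAP hypothesis: {h_map}')  # 'edge' is effectively discarded

# The drawback: the result depends on the a priori probabilities.
# A different (hypothetical) prior assignment flips the decision
# even though the data (the likelihoods) are unchanged.
skewed_priors = np.array([0.1, 0.899, 0.001])
h_map_skewed = hypotheses[np.argmax(likelihoods * skewed_priors)]
print(f'MAP hypothesis with skewed priors: {h_map_skewed}')
```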
