
Corresponding machine learning algorithm – the Naive Bayes Classifier

In the preceding example, we showed you how to calculate a post-test probability from a pretest probability, a likelihood, and a test result. The machine learning algorithm known as the Naive Bayes Classifier performs this update sequentially for every feature of a given observation. In that example, the post-test probability was 14.3%. Let's pretend that the patient now has a troponin drawn and that it is elevated. The 14.3% now becomes the pretest probability, and a new post-test probability is calculated based on the contingency table for troponin and MI, where the contingency tables are obtained from the training data. This continues until all of the features are exhausted. Again, the key assumption is that each feature is conditionally independent of all of the others, given the outcome. The classifier then assigns the observation to the category (outcome) with the highest post-test probability.
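The following is a minimal sketch of that sequential update in Python. The sensitivities, specificities, and pretest probability are illustrative placeholders, not values from real contingency tables; in practice, the classifier estimates them from the training data:

```python
def post_test_probability(pretest_prob, sensitivity, specificity, test_positive=True):
    """Update a probability with one test result using Bayes' theorem."""
    if test_positive:
        likelihood_ratio = sensitivity / (1.0 - specificity)   # LR+
    else:
        likelihood_ratio = (1.0 - sensitivity) / specificity   # LR-
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    post_test_odds = pretest_odds * likelihood_ratio
    return post_test_odds / (1.0 + post_test_odds)

# Hypothetical feature parameters, as if estimated from training-set contingency tables.
features = [
    {"name": "abnormal EKG",      "sensitivity": 0.70, "specificity": 0.60, "positive": True},
    {"name": "elevated troponin", "sensitivity": 0.90, "specificity": 0.85, "positive": True},
]

prob = 0.05  # illustrative pretest probability (prevalence) of MI
for f in features:
    prob = post_test_probability(prob, f["sensitivity"], f["specificity"], f["positive"])
    print(f"After {f['name']}: post-test probability = {prob:.3f}")
```

Each pass through the loop treats the previous post-test probability as the new pretest probability, which is exactly the chaining the independence assumption makes possible.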

The Naive Bayes Classifier is popular for a select group of applications. Its advantages include high interpretability, robustness to missing data, and ease and speed of training and prediction. However, its strong independence assumption usually prevents it from competing with more state-of-the-art algorithms.
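To illustrate the ease and speed of training and prediction, here is a brief scikit-learn sketch (not code from this book). The toy arrays are made up: each row is a patient, each column is a binary finding such as an abnormal EKG or elevated troponin, and y indicates MI (1) or no MI (0):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = BernoulliNB()
model.fit(X, y)                       # training is a single pass over the data
print(model.predict_proba([[1, 1]]))  # posterior probabilities for a new patient
```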
