Machine Learning with Swift
Alexander Sosnovshchenko
Precision, recall, and F1-score
The accuracy metric is useless for assessing the quality of an algorithm with respect to the two types of error; that's why different metrics were proposed.
Precision and recall are metrics used to evaluate the quality of predictions in information retrieval and binary classification. Precision is the proportion of true positives among all predicted positives; it shows how relevant the results are. Recall, also known as sensitivity, is the proportion of true positives among all truly positive samples. For example, if the task is to distinguish cat photos from non-cat photos, precision is the fraction of correctly predicted cats among all photos predicted as cats, while recall is the fraction of correctly predicted cats among all true cat photos.
If we denote the number of true positive cases as Tp and the number of false positive cases as Fp, then precision P is calculated as:

P = Tp / (Tp + Fp)
Recall R is calculated as:

R = Tp / (Tp + Fn)

where Fn is the number of false negative cases.
The F1 measure combines the two as their harmonic mean and is calculated as:

F1 = 2 * P * R / (P + R)
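For example, with made-up counts of 80 true positives, 20 false positives, and 10 false negatives (numbers chosen purely for illustration):

P = 80 / (80 + 20) = 0.8
R = 80 / (80 + 10) ≈ 0.889
F1 = 2 * 0.8 * 0.889 / (0.8 + 0.889) ≈ 0.842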
Now the same in Python:
In []: import numpy as np
       from sklearn.metrics import precision_score, recall_score, f1_score
       predictions = tree_model.predict(X_test)
       # convert string class labels into binary labels: 1 = 'rabbosaurus', 0 = anything else
       predictions = np.array([x == 'rabbosaurus' for x in predictions], dtype='int')
       true_labels = np.array([x == 'rabbosaurus' for x in y_test], dtype='int')
       precision_score(true_labels, predictions)
Out[]: 0.87096774193548387
In []: recall_score(true_labels, predictions)
Out[]: 0.88815789473684215
In []: f1_score(true_labels, predictions)
Out[]: 0.87947882736156346
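As a cross-check, the same precision and recall values can be derived directly from the confusion matrix. The sketch below assumes the true_labels and predictions arrays defined above; scikit-learn's confusion_matrix returns the counts in the order tn, fp, fn, tp for binary 0/1 labels:

In []: from sklearn.metrics import confusion_matrix
       tn, fp, fn, tp = confusion_matrix(true_labels, predictions).ravel()
       # recompute the metrics by hand from the raw counts
       precision = float(tp) / (tp + fp)
       recall = float(tp) / (tp + fn)
       (precision, recall)

If everything is consistent, these values match the precision_score and recall_score outputs above.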