Natural Language Processing with Java and LingPipe Cookbook
Breck Baldwin, Krishna Dayanidhi
Understanding precision and recall
The false positive from the preceding recipe is one of the four possible error categories. All the categories and their interpretations are as follows:
- For a given category X:
- True positive: The classifier guessed X, and the true category is X
- False positive: The classifier guessed X, but the true category is a category that is different from X
- True negative: The classifier guessed a category that is different from X, and the true category is different from X
- False negative: The classifier guessed a category different from X, but the true category is X
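If it helps to see this bookkeeping in code, the following minimal sketch (our own illustrative class, not part of the LingPipe API) tallies the four counts for a single category X from parallel lists of true and guessed labels:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative tally of the four error categories for one category X.
// The class and variable names here are our own; LingPipe's evaluators
// compute the same counts internally.
public class CategoryCounts {
    public static void main(String[] args) {
        String x = "english"; // the category under evaluation
        List<String> truth = Arrays.asList("english", "spanish", "english", "japanese");
        List<String> guess = Arrays.asList("english", "english", "spanish", "japanese");

        int tp = 0, fp = 0, tn = 0, fn = 0;
        for (int i = 0; i < truth.size(); ++i) {
            boolean trueIsX = truth.get(i).equals(x);
            boolean guessIsX = guess.get(i).equals(x);
            if (guessIsX && trueIsX) ++tp;        // guessed X, truly X
            else if (guessIsX && !trueIsX) ++fp;  // guessed X, truly not X
            else if (!guessIsX && !trueIsX) ++tn; // guessed not X, truly not X
            else ++fn;                            // guessed not X, truly X
        }
        System.out.printf("tp=%d fp=%d tn=%d fn=%d%n", tp, fp, tn, fn);
    }
}
```

Run on the toy labels above, this prints tp=1 fp=1 tn=1 fn=1, one instance of each category.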
With these definitions in hand, we can define the additional common evaluation metrics as follows:
- Precision for a category X is true positive / (false positive + true positive)
- The degenerate case is to make one very confident guess for 100 percent precision. This minimizes false positives but will have horrible recall.
- Recall or sensitivity for a category X is true positive / (false negative + true positive)
- The degenerate case is to guess all the data as belonging to category X for 100 percent recall. This minimizes false negatives but will have horrible precision.
- Specificity for a category X is true negative / (true negative + false positive)
- The degenerate case is to guess that all data is not in category X, which yields 100 percent specificity but zero recall for X.
The degenerate cases are provided to make clear what each metric focuses on. There are metrics, such as f-measure, that balance precision and recall, but even then there is no inclusion of true negatives, which can be highly informative. See the Javadoc for com.aliasi.classify.PrecisionRecallEvaluation for more details on evaluation.
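As a quick sanity check on the formulas above, here is a small self-contained sketch that computes them from hypothetical counts. It does not call PrecisionRecallEvaluation (which is LingPipe's class for these statistics); it just makes the arithmetic explicit:

```java
// Minimal sketch of the metric formulas above for one category X,
// using made-up counts purely for illustration.
public class PrecisionRecallSketch {
    public static void main(String[] args) {
        // Hypothetical counts for category X.
        double tp = 90, fp = 10, tn = 880, fn = 20;

        double precision = tp / (tp + fp);            // 90 / 100  = 0.900
        double recall = tp / (tp + fn);               // 90 / 110  ~ 0.818 (sensitivity)
        double specificity = tn / (tn + fp);          // 880 / 890 ~ 0.989
        double fMeasure = 2 * precision * recall / (precision + recall); // balances P and R

        System.out.printf("precision=%.3f recall=%.3f specificity=%.3f f=%.3f%n",
                precision, recall, specificity, fMeasure);
    }
}
```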
- In our experience, most business needs map to one of three scenarios:
- High precision / high recall: The language ID needs to have both good coverage and good accuracy; otherwise, lots of stuff will go wrong. Fortunately, for distinct languages where a mistake will be costly (such as Japanese versus English or English versus Spanish), the LM classifiers perform quite well.
- High precision / usable recall: Most business use cases have this shape. For example, a search engine that automatically changes a query if it is misspelled better not make lots of mistakes. This means it looks pretty bad to change "Breck Baldwin" to "Brad Baldwin", but no one really notices if "Bradd Baldwin" is not corrected.
- High recall / usable precision: Intelligence analysis looking for a particular needle in a haystack will tolerate a lot of false positives in support of finding the intended target. This was an early lesson from our DARPA days.