Effective Amazon Machine Learning
Alexis Perrier
Imbalanced datasets
Dealing with imbalanced datasets is a very common challenge in classification problems.
Consider a binary classification problem where your goal is to predict a positive versus a negative class, and the ratio between the two classes is highly skewed in favor of the negative class. This situation is frequently encountered in the following instances:
- In a medical context, where the positive class corresponds to the presence of cancerous cells in patients drawn from a large random population
- In a marketing context, where the positive class corresponds to prospects who buy an insurance policy, while the majority of people do not
In both these cases, we want to detect the samples in the minority class, but they are overwhelmingly outnumbered by the samples in the majority (negative) class. Most predictive models will be highly biased toward the majority class.
In the presence of highly imbalanced classes, a very simplistic model that always predicts the majority class and never the minority one will have excellent accuracy but will never detect the important and valuable class. Consider, for instance, a dataset composed of 1,000 samples: 50 positive samples that we want to detect or predict, and 950 negative ones of little interest. That simplistic model has an accuracy rate of 95%, a seemingly decent score, even though the model is totally useless. This problem is known as the accuracy paradox (https://en.wikipedia.org/wiki/Accuracy_paradox).
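As a quick illustration, here is a minimal sketch of the 1,000-sample scenario above, assuming scikit-learn is available (it is not mentioned in this section; we use it only for the metric functions). The always-negative baseline scores 95% accuracy while its recall on the positive class is zero:

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# 950 negative samples (0) and 50 positive samples (1), as in the example above
y_true = np.array([0] * 950 + [1] * 50)

# A simplistic "model" that always predicts the majority (negative) class
y_pred = np.zeros_like(y_true)

print(accuracy_score(y_true, y_pred))  # 0.95 -- looks impressive
print(recall_score(y_true, y_pred))    # 0.0  -- never detects a positive
```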
A straightforward solution would be to gather more data, focusing on collecting samples of the minority class in order to balance out the two classes. But that's not always possible.
There are many other strategies for dealing with imbalanced datasets. We will briefly look at some of the most common ones. One approach is to resample the data by undersampling or oversampling it (a code sketch of both follows the list):
- Undersampling consists of discarding most samples in the majority class in order to tilt the minority/majority class ratio back toward 50/50. The obvious problem with that strategy is that a lot of data is discarded, and with it, meaningful signal for the model. This technique can be useful when the dataset is large enough to afford the loss.
- Oversampling consists of duplicating samples that belong to the minority class. Contrary to undersampling, there is no loss of data with this strategy. However, oversampling adds extra weight to certain patterns from the minority class, which may not bring useful information to the model and can add noise instead. Oversampling is useful when the dataset is small and you can't afford to leave any data out.
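The following is a minimal sketch of both strategies, assuming the imbalanced-learn package mentioned later in this section is installed (imported as imblearn) and scikit-learn is available for generating a toy dataset; RandomUnderSampler and RandomOverSampler are part of imbalanced-learn's public API:

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler

# Toy dataset with roughly a 95/5 class split, mirroring the example above
X, y = make_classification(n_samples=1000, weights=[0.95], random_state=0)
print(Counter(y))  # roughly Counter({0: 950, 1: 50})

# Undersampling: randomly discard majority samples until the classes balance
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
print(Counter(y_under))  # both classes at the minority count

# Oversampling: randomly duplicate minority samples until the classes balance
X_over, y_over = RandomOverSampler(random_state=0).fit_resample(X, y)
print(Counter(y_over))  # both classes at the majority count
```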
Undersampling and oversampling are two simple, easy-to-implement methods that are useful for establishing a baseline. Another widely used method consists of creating synthetic samples from the existing data. A popular sample-creation technique is the SMOTE method, which stands for Synthetic Minority Over-sampling Technique. SMOTE works by selecting similar samples (with respect to some distance measure) from the minority class and perturbing their attributes, creating new minority samples within clusters of existing minority samples. SMOTE is less effective in the presence of high-dimensional datasets.
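Under the same assumptions (imbalanced-learn installed, scikit-learn for the toy data), SMOTE is a drop-in replacement for the random oversampler; in imbalanced-learn's implementation, each synthetic point is an interpolation between a minority sample and one of its nearest minority-class neighbors rather than an exact duplicate:

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, weights=[0.95], random_state=0)

# Each synthetic point is interpolated between a minority sample and one of
# its k nearest minority-class neighbors (k_neighbors=5 is the default)
X_smote, y_smote = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(Counter(y_smote))  # both classes at the majority count
```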
The imbalanced-learn library in Python (http://github.com/scikit-learn-contrib/imbalanced-learn) and the unbalanced package in R (https://cran.r-project.org/web/packages/unbalanced/index.html) both offer a large set of advanced techniques on top of the ones mentioned here.
Note that the choice of metric used to assess the performance of the model is particularly important in the context of an imbalanced dataset. The accuracy rate, which is defined as the ratio of correctly predicted samples to the total number of samples, is the most straightforward metric in classification problems. But as we have seen, this accuracy rate is not a good indicator of the model's predictive power in the presence of a highly skewed class distribution.
In such a context, two metrics are recommended:
- Cohen's kappa: A robust measure of the agreement between real and predicted classes (https://en.wikipedia.org/wiki/Cohen%27s_kappa)
- The F1 score: The harmonic mean of precision and recall (https://en.wikipedia.org/wiki/F1_score)
The F1 score is the metric used by Amazon ML to assess the quality of a classification model. We give the definition of the F1 score in the Evaluating the performance of your model section at the end of this chapter.
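Both metrics are available in scikit-learn (again, an assumption on our part; the section itself does not prescribe a library). Reusing the always-negative baseline from earlier shows how sharply they penalize it compared to raw accuracy:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

y_true = np.array([0] * 950 + [1] * 50)   # the 950/50 example again
y_pred = np.zeros_like(y_true)            # the always-negative baseline

# F1 = 2 * precision * recall / (precision + recall): 0 when nothing is detected
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0
# Kappa corrects for agreement by chance: 0 means no better than chance
print(cohen_kappa_score(y_true, y_pred))          # 0.0
```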