
Imbalanced datasets

Dealing with imbalanced datasets is a very common classification problem.

Consider a binary classification problem. Your goal is to predict a positive versus a negative class, and the ratio between the two classes is highly skewed in favor of the negative class. This situation is frequently encountered in instances such as the following:

  • In a medical context, where the positive class corresponds to the presence of cancerous cells in patients drawn from a large random population
  • In a marketing context, where the positive class corresponds to prospects buying an insurance policy while the majority of people are not buying one

In both these cases, we want to detect the samples in the minority class, but they are overwhelmingly outnumbered by the samples in the majority (negative) class. Most predictive models will be highly biased toward the majority class.

In the presence of highly imbalanced classes, a very simplistic model that always predicts the majority class and never the minority one will have excellent accuracy but will never detect the important and valuable class. Consider, for instance, a dataset composed of 1,000 samples, with 50 positive samples that we want to detect or predict and 950 negative ones of little interest. That simplistic model has an accuracy rate of 95%, which looks like a decent score even though the model is totally useless. This problem is known as the accuracy paradox (https://en.wikipedia.org/wiki/Accuracy_paradox).
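The accuracy paradox on this 1,000-sample dataset can be verified in a few lines of plain Python (the labels below are made up to match the numbers in the text):

```python
# Accuracy paradox: 950 negatives, 50 positives, and a "model"
# that blindly predicts the majority (negative) class every time.
y_true = [0] * 950 + [1] * 50   # 0 = negative (majority), 1 = positive (minority)
y_pred = [0] * 1000             # always predict the majority class

# Accuracy: fraction of samples predicted correctly
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# Recall on the positive class: fraction of positives actually detected
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / 50

print(accuracy)  # 0.95 -- looks respectable...
print(recall)    # 0.0  -- ...yet not a single positive sample is detected
```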

A straightforward solution would be to gather more data, with a focus on collecting samples of the minority class, in order to balance out the two classes. But that's not always a possibility.

There are many other strategies to deal with imbalanced datasets. We will briefly look at some of the most common ones. One approach is to resample the available data by undersampling or oversampling:

  • Undersampling consists of discarding most samples in the majority class in order to bring the minority/majority class ratio back toward 50/50. The obvious problem with this strategy is that a lot of data is discarded, and with it, meaningful signal for the model. This technique can be useful in the presence of large enough datasets.
  • Oversampling consists of duplicating samples that belong to the minority class. Contrary to undersampling, no data is lost with this strategy. However, oversampling gives extra weight to certain patterns from the minority class that may not bring useful information to the model; in effect, it adds noise. Oversampling is useful when the dataset is small and you can't afford to leave any data out.
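Both resampling strategies can be sketched with scikit-learn's `sklearn.utils.resample` helper. This is a minimal illustration on synthetic data (the 950/50 split mirrors the example above); the imbalanced-learn library mentioned later wraps the same ideas in `RandomUnderSampler` and `RandomOverSampler`:

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.RandomState(42)
X = rng.randn(1000, 3)                      # synthetic features
y = np.array([0] * 950 + [1] * 50)          # 950 majority, 50 minority

X_maj, y_maj = X[y == 0], y[y == 0]
X_min, y_min = X[y == 1], y[y == 1]

# Undersampling: keep only 50 of the 950 majority samples (without replacement)
X_maj_down, y_maj_down = resample(X_maj, y_maj, replace=False,
                                  n_samples=len(y_min), random_state=0)
X_under = np.vstack([X_maj_down, X_min])
y_under = np.concatenate([y_maj_down, y_min])

# Oversampling: duplicate minority samples (with replacement) up to 950
X_min_up, y_min_up = resample(X_min, y_min, replace=True,
                              n_samples=len(y_maj), random_state=0)
X_over = np.vstack([X_maj, X_min_up])
y_over = np.concatenate([y_maj, y_min_up])

print(np.bincount(y_under))  # [50 50]
print(np.bincount(y_over))   # [950 950]
```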

Undersampling and oversampling are two simple, easy-to-implement methods that are useful for establishing a baseline. Another widely used method consists of creating synthetic samples from the existing data. A popular sample creation technique is the SMOTE method, which stands for Synthetic Minority Over-sampling Technique. SMOTE works by selecting similar samples (with respect to some distance measure) from the minority class and adding perturbations to the selected attributes. SMOTE thus creates new minority samples within clusters of existing minority samples. SMOTE is less effective on high-dimensional datasets.
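The imbalanced-learn library mentioned below provides a production implementation (`imblearn.over_sampling.SMOTE`), but the core idea, interpolating between a minority sample and one of its k nearest minority-class neighbors, can be sketched with just scikit-learn. The `smote_sketch` function and its parameters here are illustrative, not part of any library:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sketch(X_min, n_new, k=5, seed=0):
    """SMOTE-style sampler (illustrative sketch): each synthetic point lies
    on the segment between a minority sample and one of its k nearest
    minority-class neighbours."""
    rng = np.random.RandomState(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)            # idx[:, 0] is each point itself
    new = []
    for _ in range(n_new):
        i = rng.randint(len(X_min))          # pick a random minority sample
        j = idx[i, rng.randint(1, k + 1)]    # pick one of its k neighbours
        gap = rng.rand()                     # interpolation factor in [0, 1)
        new.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(new)

rng = np.random.RandomState(42)
X_min = rng.randn(50, 3)                     # 50 minority samples
X_synth = smote_sketch(X_min, n_new=900)     # 900 synthetic minority samples
print(X_synth.shape)                         # (900, 3)
```

Because the synthetic points interpolate between existing neighbors, they stay inside the minority clusters rather than being scattered at random.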

The imbalanced-learn library in Python (http://github.com/scikit-learn-contrib/imbalanced-learn) and the unbalanced package in R (https://cran.r-project.org/web/packages/unbalanced/index.html) both offer a large set of advanced techniques beyond the ones mentioned here.

Note that the choice of metric used to assess the performance of the model is particularly important in the context of an imbalanced dataset. The accuracy rate, defined as the ratio of correctly predicted samples to the total number of samples, is the most straightforward metric in classification problems. But as we have seen, the accuracy rate is not a good indicator of the model's predictive power in the presence of a highly skewed class distribution.

In such a context, a metric that takes the minority class into account, such as the F1 score, is recommended. The F1 score is the metric used by Amazon ML to assess the quality of a classification model. We give the definition of the F1 score under the Evaluating the performance of your model section at the end of this chapter.
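As a preview of why the F1 score (the harmonic mean of precision and recall) is better suited here, compare it to accuracy on the 950/50 dataset from earlier, using scikit-learn's standard metric functions. Both "models" below are hand-crafted label lists for illustration:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 950 + [1] * 50
y_trivial = [0] * 1000                       # always predicts the majority class
y_better = [0] * 950 + [1] * 25 + [0] * 25   # detects half the positives

print(accuracy_score(y_true, y_trivial))     # 0.95 -- misleadingly high
print(f1_score(y_true, y_trivial))           # 0.0  -- trivial model exposed
print(f1_score(y_true, y_better))            # ~0.667 -- rewards real detection
```

The trivial model's high accuracy collapses to an F1 score of zero because it has zero recall on the positive class, which is exactly the failure mode the accuracy paradox hides.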
