- Advanced Machine Learning with R
- Cory Lesmeister, Dr. Sunil Kumar Chinnamgari
Weight of evidence and information value
I stumbled into this method several years ago during consulting work. The team I was on was really into big datasets and constrained to using SAS statistical software. It was also a critical requirement that the customer teams could easily interpret the models.
Given the possibility of hundreds, even thousands, of possible features, I was privileged enough to learn the use of WOE and IV from a former rocket scientist. That's right: a person who actually worked on manned space flight. I became an eager pupil. Now, this method isn't a panacea. First of all, it's univariate, so features that are thrown out can become significant in a multivariate model, and vice versa. I can say that it provides a nice complement to other methods, and you should keep it in your modeling toolbox. I believe it had its origins in the world of credit scoring, so if you work in the financial industry, you may already be familiar with it.
First, let's look at the formula for WOE:
$$\mathrm{WOE}_i = \ln\left(\frac{\mathrm{Events}_i \,/\, \mathrm{TotalEvents}}{\mathrm{NonEvents}_i \,/\, \mathrm{TotalNonEvents}}\right)$$

Here, $i$ indexes the bins of the feature.
The WOE serves as a component of the IV. For numeric features, you first bin your data and then calculate the WOE separately for each bin; one possible binning approach is sketched below. For categorical features, or when one-hot encoded, each level serves as its own bin, and you calculate the WOE for each.
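As a minimal sketch of the binning step (the quantile cut points and variable names here are my own illustration, not from the chapter), you might do something like this in R:
> # Illustrative only: split a numeric feature x into five quantile bins
> set.seed(123)
> x <- rnorm(100)
> bins <- cut(x, breaks = quantile(x, probs = seq(0, 1, by = 0.2)),
+             include.lowest = TRUE)
> table(bins)
With each observation assigned to a bin, let's take an example and demonstrate the WOE calculation in R.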
Our data consists of one input feature coded as 0 or 1, so we'll have just two bins. For each bin, we calculate our WOE. In bin 1, or where values are equal to 0, there are four observations as events and 96 as non-events. Conversely, in bin 2, or where values are equal to 1, we have 12 observations as events and 88 as non-events. Let's see how to calculate the WOE for each bin:
> bin1events <- 4
> bin1nonEvents <- 96
> bin2events <- 12
> bin2nonEvents <- 88
> totalEvents <- bin1events + bin2events
> totalNonEvents <- bin1nonEvents + bin2nonEvents
# Now calculate each bin's share of the total events and non-events
> bin1percentE <- bin1events / totalEvents
> bin1percentNE <- bin1nonEvents / totalNonEvents
> bin2percentE <- bin2events / totalEvents
> bin2percentNE <- bin2nonEvents / totalNonEvents
# It's now possible to produce WOE
> bin1WOE <- log(bin1percentE / bin1percentNE)
> bin2WOE <- log(bin2percentE / bin2percentNE)
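Printing the two values confirms the calculation (shown at R's default seven significant digits):
> bin1WOE
[1] -0.7357068
> bin2WOE
[1] 0.4499169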
Completing this, you end up with WOE values for bin1 and bin2 of roughly -0.74 and 0.45, respectively. We now use these to calculate the IV per bin, and then sum the results to arrive at an overall IV for the feature. The formula is as follows:
$$\mathrm{IV} = \sum_i \left(\frac{\mathrm{Events}_i}{\mathrm{TotalEvents}} - \frac{\mathrm{NonEvents}_i}{\mathrm{TotalNonEvents}}\right) \times \mathrm{WOE}_i$$
Taking our current example, this is our feature's IV:
> bin1IV <- (bin1percentE - bin1percentNE) * bin1WOE
> bin2IV <- (bin2percentE - bin2percentNE) * bin2WOE
> bin1IV + bin2IV
[1] 0.3221803
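It can be worth inspecting the per-bin contributions before summing; both bins contribute meaningfully here:
> bin1IV
[1] 0.1999203
> bin2IV
[1] 0.12226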
The IV for the feature is 0.322. Now, what does that mean? The short answer is that it depends. A commonly used heuristic helps decide what IV threshold makes sense for including a feature in model development:
- < 0.02 not predictive
- 0.02 to 0.1 weak
- 0.1 to 0.3 medium
- 0.3 to 0.5 strong
- > 0.5 suspicious
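To tie the steps together, here's a minimal sketch of a reusable helper that computes per-bin WOE values and the total IV from vectors of event and non-event counts. The function name calc_iv and its interface are my own illustration, not something defined in this chapter:
> calc_iv <- function(events, nonEvents) {
+   pctE <- events / sum(events)          # each bin's share of all events
+   pctNE <- nonEvents / sum(nonEvents)   # each bin's share of all non-events
+   woe <- log(pctE / pctNE)
+   list(woe = woe, iv = sum((pctE - pctNE) * woe))
+ }
> calc_iv(events = c(4, 12), nonEvents = c(96, 88))
$woe
[1] -0.7357068  0.4499169

$iv
[1] 0.3221803
One caveat: a zero count in any bin produces an infinite WOE, so in practice a small adjustment, such as adding 0.5 to each cell count before taking the log, is often applied.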
The following example will present some interesting decisions about where to draw the line.