
Training a logistic regression algorithm

Follow these simple steps to train a logistic regression algorithm:

  1. The first step is to install the required packages and load the magrittr library into our environment:
> library(magrittr)
> install.packages("caret")
> install.packages("classifierplots")
> install.packages("earth")
> install.packages("Information")
> install.packages("InformationValue")
> install.packages("Metrics")
> install.packages("tidyverse")
  2. Here, we load the file, then check the dimensions and examine a table of the customer labels:
> santander <- read.csv("~/santander_prepd.csv")

> dim(santander)
[1] 76020 143

> table(santander$y)

0 1
73012 3008

We have 76,020 observations, but only 3,008 customers are labeled 1, meaning dissatisfied. Next, I'll use caret to create training and test sets with an 80/20 split.

  3. caret's createDataPartition() function automatically stratifies the sample on the response, so we can rest assured that the percentage of dissatisfied customers is balanced between the train and test sets:
> set.seed(1966)

> trainIndex <- caret::createDataPartition(santander$y, p = 0.8, list = FALSE)

> train <- santander[trainIndex, ]

> test <- santander[-trainIndex, ]
  4. Let's see how the response is balanced between the two datasets:
> table(train$y)

0 1
58411 2405

> table(test$y)

0 1
14601 603
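
If you want the exact proportions rather than raw counts, base R's prop.table() converts the tables to percentages; the commented values below follow directly from the counts above:

> prop.table(table(train$y)) # about 0.9605 vs. 0.0395

> prop.table(table(test$y)) # about 0.9603 vs. 0.0397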

Roughly 4 percent of the customers in each set are dissatisfied, so we can proceed. One interesting thing that can happen when you split the data is that a feature that was near zero variance can become a zero variance feature in your training set. When I prepared this data, I only removed the zero variance features. 

  5. There were some low-variance features, so let's see whether the split has created any new zero-variance ones:
> train_zero <- caret::nearZeroVar(train, saveMetrics = TRUE)

> table(train_zero$zeroVar)

FALSE TRUE
142 1
  6. OK, one feature is now zero variance because of the split, and we can remove it:
> train <- train[, train_zero$zeroVar == FALSE]
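
As a quick sanity check, the dimensions should now reflect the 80 percent split (58,411 + 2,405 = 60,816 rows) and the dropped column (143 - 1 = 142 columns):

> dim(train)
[1] 60816 142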

Our data frame now has 141 input features plus the column of customer labels. As with linear regression, for logistic regression to produce meaningful results, that is, to avoid overfitting, you need to reduce the number of input features. We could press forward with stepwise selection or the like, as we did in the previous chapter, or implement the feature regularization methods we'll discuss in the next chapter. However, I want to introduce a univariate feature reduction method using Weight of Evidence (WOE) and Information Value (IV) and show how to use it in a classification problem in conjunction with logistic regression.
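
Before we get there, the intuition is worth a quick sketch. For each binned feature, WOE compares the class distributions within a bin, WOE = ln(% of 1s in the bin / % of 0s in the bin), and IV sums (% of 1s - % of 0s) x WOE across the bins, yielding a single strength-of-association score per feature. The Information package we installed in step 1 implements this; treat the following as a minimal preview of the idea rather than the exact workflow we'll follow:

> library(Information)

> # rank the training features by Information Value
> info <- create_infotables(data = train, y = "y", parallel = FALSE)

> head(info$Summary) # features sorted by IV, strongest first

> info$Tables[[info$Summary$Variable[1]]] # per-bin WOE table for the strongest feature

A common credit-scoring rule of thumb treats an IV below roughly 0.02 as useless and above roughly 0.3 as strong, which is what makes IV handy for univariate feature reduction.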
