Training a logistic regression algorithm

Follow these simple steps to train a logistic regression algorithm:

  1. The first step is to install the packages we need (if they aren't already installed) and load the magrittr library into our environment:
> install.packages("caret")
> install.packages("classifierplots")
> install.packages("earth")
> install.packages("Information")
> install.packages("InformationValue")
> install.packages("Metrics")
> install.packages("tidyverse")
> library(magrittr)
  2. Here, we load the file, then check the dimensions and examine a table of the customer labels:
> santander <- read.csv("~/santander_prepd.csv")

> dim(santander)
[1] 76020 143

> table(santander$y)

    0     1 
73012  3008

We have 76,020 observations, but only 3,008 customers are labeled 1, meaning they're dissatisfied. Next, I'm going to use caret to create training and test sets with an 80/20 split.
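
To put that imbalance in proportion terms before splitting, we can check it directly (about 96 percent satisfied versus 4 percent dissatisfied):

> round(prop.table(table(santander$y)), 2)

   0    1 
0.96 0.04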

  3. caret's createDataPartition() function automatically stratifies the sample based on the response, so we can rest assured that the percentage of dissatisfied customers will be balanced between the train and test sets:
> set.seed(1966)

> trainIndex <- caret::createDataPartition(santander$y, p = 0.8, list = FALSE)

> train <- santander[trainIndex, ]

> test <- santander[-trainIndex, ]
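
As a quick sanity check, the training set should hold exactly 80 percent of the observations and, at this point, still all 143 columns:

> dim(train)
[1] 60816 143

> nrow(train) / nrow(santander)
[1] 0.8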
  4. Let's see how the response is balanced between the two datasets:
> table(train$y)

    0     1 
58411  2405

> table(test$y)

    0     1 
14601   603
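
Since y is coded as 0/1, the mean of the response gives the proportion of dissatisfied customers in each set directly:

> round(c(train = mean(train$y), test = mean(test$y)), 4)
 train   test 
0.0395 0.0397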

There are roughly 4 percent dissatisfied customers in each set, so we can proceed. One interesting thing that can happen when you split the data is that a feature that was only near zero variance in the full data becomes a zero variance feature in your training set. When I prepared this data, I removed only the zero variance features.

  5. There were still some low variance features, so let's see whether the split has produced any new zero variance ones:
> train_zero <- caret::nearZeroVar(train, saveMetrics = TRUE)

> table(train_zero$zeroVar)

FALSE  TRUE 
  142     1
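
If you're curious which feature collapsed, the row names of the saveMetrics output are the column names, so you can pull the offending name directly (output omitted here):

> rownames(train_zero)[train_zero$zeroVar]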
  6. OK, one feature is now zero variance because of the split, and we can remove it:
> train <- train[, !train_zero$zeroVar]
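
A quick dimension check confirms that exactly one column was dropped:

> dim(train)
[1] 60816 142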

Our data frame now has 141 input features and the column of customer labels. As with linear regression, for logistic regression to produce meaningful results, which is to say not to overfit, you need to reduce the number of input features. We could press forward with stepwise selection or the like, as we did in the previous chapter, or we could implement the feature regularization methods we'll discuss in the next chapter. However, I want to introduce a univariate feature reduction method using Weight of Evidence (WOE) and Information Value (IV), and discuss how to use it in a classification problem in conjunction with logistic regression.
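
As a preview of that approach, the Information package we installed earlier does the heavy lifting: its create_infotables() function bins each feature, computes the WOE for each bin, and aggregates the bins into an IV per feature. Here is a minimal sketch, with parallel = FALSE simply keeping the computation single-threaded:

> info_values <- Information::create_infotables(data = train, y = "y", parallel = FALSE)

> head(info_values$Summary)

The Summary data frame lists each feature with its IV, sorted from strongest to weakest, which is what we'll use to decide which inputs are worth keeping.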
