Multiple linear regression
Whenever we have more than one input feature and want to build a linear regression model, we are in the realm of multiple linear regression. The general equation for a multiple linear regression model with k input features is:
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k + \varepsilon$$
Our assumptions about the model and about the error component ε remain the same as with simple linear regression; in addition, because we now have more than one input feature, we assume that the features are independent of each other. Instead of using simulated data to demonstrate multiple linear regression, we will analyze two real-world datasets.
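In R, such a model is fit with the same lm() function used for simple linear regression, simply listing all the input features on the right-hand side of the formula. Here is a minimal sketch using a hypothetical data frame df with output y and features x1, x2, and x3 (these names are illustrative and not part of our datasets):

> # Hypothetical example: y modeled as a linear function of x1, x2, and x3
> model <- lm(y ~ x1 + x2 + x3, data = df)
> coef(model)  # estimates of the intercept and the three beta coefficients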
Predicting CPU performance
Our first real-world dataset was presented by the researchers Dennis F. Kibler, David W. Aha, and Marc K. Albert in a 1989 paper entitled Instance-based prediction of real-valued attributes and published in the journal Computational Intelligence. The data contains the characteristics of different CPU models, such as the cycle time and the amount of cache memory. When deciding between processors, we would like to take all of these characteristics into account, but ideally, we'd like to compare processors on a single numerical scale. For this reason, we often develop programs to benchmark the relative performance of a CPU. Our dataset also comes with the published relative performance of each CPU, and our objective will be to use the available CPU characteristics as features to predict this. The dataset can be obtained online from the UCI Machine Learning Repository via this link: http://archive.ics.uci.edu/ml/datasets/Computer+Hardware.
The UCI Machine Learning Repository is a wonderful online resource that hosts a large number of datasets, many of which are often cited by authors of books and tutorials. It is well worth the effort to familiarize yourself with this website and its datasets. A very good way to learn predictive analytics is to practice using the techniques you learn in this book on different datasets, and the UCI repository provides many of these for exactly this purpose.
The machine.data file contains all our data in a comma-separated format, with one line per CPU model. We'll import this into R and label all the columns. Note that there are 10 columns in total, but we don't need the first two for our analysis, as these are just the vendor and model name of the CPU. Similarly, the final column, ERP, is a predicted estimate of the relative performance that was produced by the researchers themselves; our actual output variable, PRP, is in column 9. We'll store the data that we need in a data frame called machine:
> machine <- read.csv("machine.data", header = F)
> names(machine) <- c("VENDOR", "MODEL", "MYCT", "MMIN", "MMAX", "CACH", "CHMIN", "CHMAX", "PRP", "ERP")
> machine <- machine[, 3:9]
> head(machine, n = 3)
  MYCT MMIN  MMAX CACH CHMIN CHMAX PRP
1  125  256  6000  256    16   128 198
2   29 8000 32000   32     8    32 269
3   29 8000 32000   32     8    32 220
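Before going further, it is worth a quick sanity check on the import; a minimal sketch using standard base R functions:

> dim(machine)         # expect 209 rows and the 7 columns we kept
> any(is.na(machine))  # should be FALSE, confirming no missing values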
The dataset also comes with definitions of the data columns:

- MYCT: machine cycle time in nanoseconds
- MMIN: minimum main memory in kilobytes
- MMAX: maximum main memory in kilobytes
- CACH: cache memory in kilobytes
- CHMIN: minimum channels in units
- CHMAX: maximum channels in units
- PRP: published relative performance
- ERP: estimated relative performance (produced by the original researchers)
The dataset contains no missing values, so no observations need to be removed or modified. Note that we only have roughly 200 data points, which is generally considered a very small sample. Nonetheless, we will proceed with splitting our data into a training set and a test set, with an 85-15 split, as follows:
> library(caret)
> set.seed(4352345)
> machine_sampling_vector <- createDataPartition(machine$PRP, p = 0.85, list = FALSE)
> machine_train <- machine[machine_sampling_vector,]
> machine_train_features <- machine_train[, 1:6]
> machine_train_labels <- machine$PRP[machine_sampling_vector]
> machine_test <- machine[-machine_sampling_vector,]
> machine_test_labels <- machine$PRP[-machine_sampling_vector]
Now that we have our data set up, we'll usually want to investigate further and check whether some of our assumptions for linear regression are valid. For example, we would like to know whether we have any highly correlated features. To do this, we can construct a correlation matrix with the cor() function and use the findCorrelation() function from the caret package to get suggestions for which features to remove:
> machine_correlations <- cor(machine_train_features)
> findCorrelation(machine_correlations)
integer(0)
> findCorrelation(machine_correlations, cutoff = 0.75)
[1] 3
> cor(machine_train$MMIN, machine_train$MMAX)
[1] 0.7679307
Using the default cutoff of 0.9 for a high degree of correlation, we find that none of our features should be removed. When we reduce this cutoff to 0.75, caret recommends that we remove the third feature (MMAX). As the final line of the preceding code shows, the degree of correlation between this feature and MMIN is 0.768. While this value is not very high, it is still high enough to cause some concern that it will affect our model. Intuitively, if we look at the definitions of our input features, we would certainly expect that a CPU model with a relatively high value for the minimum main memory is also likely to have a relatively high value for the maximum main memory. Linear regression can sometimes still give us a good model with correlated variables, but we would expect to get better results if our variables were uncorrelated. For now, we've decided to keep all our features for this dataset.
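If we want to quantify how much this correlation inflates the variance of our coefficient estimates, one option (not part of the original analysis) is to compute variance inflation factors with the vif() function from the car package. A minimal sketch, assuming the car package is installed; as a common rule of thumb, VIF values above 5-10 suggest problematic multicollinearity:

> library(car)
> # Fit a model on all training features and inspect variance inflation factors
> vif(lm(PRP ~ ., data = machine_train))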
Predicting the price of used cars
Our second dataset is the cars data frame included in the caret package and was collected by Shonda Kuiper in 2008 from the Kelley Blue Book website, www.kbb.com, an online resource for obtaining reliable prices for used cars. The dataset comprises 804 GM cars, all with model year 2005. It includes a number of car attributes, such as the mileage and engine size, as well as the suggested selling price. Many features are binary indicator variables, such as the Buick feature, which indicates whether a particular car's make is Buick. The cars were all in excellent condition and less than one year old when priced, so car condition is not included as a feature. Our objective for this dataset is to build a model that will predict the selling price of a car using the values of these attributes. The definitions of the features are as follows:

- Price: the suggested retail price in US dollars (our output variable)
- Mileage: the number of miles the car has been driven
- Cylinder: the number of engine cylinders
- Doors: the number of doors
- Cruise: indicator for cruise control
- Sound: indicator for upgraded speakers
- Leather: indicator for leather seats
- Buick, Cadillac, Chevy, Pontiac, Saab, Saturn: indicators for the car's make
- convertible, coupe, hatchback, sedan, wagon: indicators for the car's body type
As with the machine dataset, we should investigate the correlation between the input features:
> library(caret)
> data(cars)
> cars_cor <- cor(cars[, -1])
> findCorrelation(cars_cor)
integer(0)
> findCorrelation(cars_cor, cutoff = 0.75)
[1] 3
> cor(cars$Doors, cars$coupe)
[1] -0.8254435
> table(cars$coupe, cars$Doors)
   
      2   4
  0  50 614
  1 140   0
Just as with the machine dataset, we have a correlation that shows up when we set cutoff to 0.75 in the findCorrelation() function of caret. By directly examining the correlation matrix, we find that there is a relatively high degree of correlation between the Doors feature and the coupe feature. By cross-tabulating these two, we can see why this is the case. If we know that the type of a car is a coupe, then the number of doors is always two. If the car is not a coupe, then it most likely has four doors.
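We can read these conditional proportions directly by normalizing the cross-tabulation by row with base R's prop.table() function; the values below follow from the counts in the table above:

> # Row-wise proportions: each row (coupe = 0 or 1) sums to 1
> prop.table(table(cars$coupe, cars$Doors), margin = 1)
   
            2         4
  0 0.0753012 0.9246988
  1 1.0000000 0.0000000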
Another problematic aspect of the cars data is that some features are exact linear combinations of other features. These can be discovered using the findLinearCombos() function in the caret package:
> findLinearCombos(cars)
$linearCombos
$linearCombos[[1]]
[1] 15  4  8  9 10 11 12 13 14

$linearCombos[[2]]
[1] 18  4  8  9 10 11 12 13 16 17

$remove
[1] 15 18
Here, we are advised to drop the coupe and wagon columns, which are the 15th and 18th features, respectively, because they are exact linear combinations of other features. We will remove both of these from our data frame, thus also eliminating the correlation problem we saw previously.
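These dependencies arise by design: every car in the dataset has exactly one make and exactly one body type, so each group of indicator columns sums to one in every row, making one column in each group redundant (the classic dummy variable trap). A quick check, whose expected result follows from how the dataset was constructed:

> # Each group of mutually exclusive indicators should sum to 1 in every row
> all(rowSums(cars[, c("Buick", "Cadillac", "Chevy", "Pontiac", "Saab", "Saturn")]) == 1)
> all(rowSums(cars[, c("convertible", "coupe", "hatchback", "sedan", "wagon")]) == 1)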
Next, we'll split our data into training and test sets:
> cars <- cars[, c(-15, -18)]
> set.seed(232455)
> cars_sampling_vector <- createDataPartition(cars$Price, p = 0.85, list = FALSE)
> cars_train <- cars[cars_sampling_vector,]
> cars_train_features <- cars_train[, -1]
> cars_train_labels <- cars$Price[cars_sampling_vector]
> cars_test <- cars[-cars_sampling_vector,]
> cars_test_labels <- cars$Price[-cars_sampling_vector]
Now that we have our data ready, we'll build some models.
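As a preview of that step, a first multiple linear regression using every remaining feature can be fit with the y ~ . formula shorthand, which uses all other columns as inputs (the model object names here are illustrative):

> # Regress each output on all features in its training set
> machine_model <- lm(PRP ~ ., data = machine_train)
> cars_model <- lm(Price ~ ., data = cars_train)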