Reverse transformation of natural log predictions
Now that you have read Duan's paper several times, here's how to apply it to our work. I'm going to provide you with a user-defined function that will do the following:
1. Exponentiate the residuals from the transformed model
2. Exponentiate the predicted values from the transformed model
3. Calculate the mean of the exponentiated residuals
4. Calculate the smeared predictions by multiplying the values in step 2 by the value in step 3
5. Return the results
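In symbols, the smearing estimate that these steps describe retransforms a prediction $\hat{f}(x_i)$ made on the natural log scale back to the original scale by scaling its exponential with the average of the exponentiated residuals $\hat{\epsilon}_j$:

$$\hat{y}_i = \left( \frac{1}{n} \sum_{j=1}^{n} e^{\hat{\epsilon}_j} \right) e^{\hat{f}(x_i)}$$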
Here's the function, which requires only two arguments:
> duan_smear <- function(pred, resid){
    expo_resid <- exp(resid)            # step 1: exponentiate the residuals
    expo_pred <- exp(pred)              # step 2: exponentiate the predictions
    avg_expo_resid <- mean(expo_resid)  # step 3: the smearing factor
    smear_predictions <- avg_expo_resid * expo_pred  # step 4: smear
    return(smear_predictions)           # step 5: return the results
  }
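As a quick sanity check, you can call the function with made-up values (purely illustrative numbers). With all residuals equal to zero, the smearing factor is exactly 1 and the function reduces to plain exponentiation:
> duan_smear(pred = log(c(100, 200)), resid = c(0, 0))
[1] 100 200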
Next, we calculate the new predictions from the results of the MARS model:
> duan_pred <- duan_smear(pred = earth_pred, resid = earth_residTest)
We can now see how the model error plays out at the original sales price:
> caret::postResample(duan_pred, test_y)
RMSE Rsquared MAE
23483.5659 0.9356 16405.7395
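Recall that caret::postResample() reports the root mean squared error, R-squared, and mean absolute error, in that order. If you'd like to verify the first and last by hand, here's a minimal sketch, assuming duan_pred and test_y are both on the original dollar scale:
> sqrt(mean((duan_pred - test_y)^2))  # RMSE
> mean(abs(duan_pred - test_y))       # MAE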
We can say that the model is wrong, on average, by $16,406. How does that compare with not smearing? Let's see:
> exp_pred <- exp(earth_pred)
> caret::postResample(exp_pred, test_y)
RMSE Rsquared MAE
23106.1245 0.9356 16117.4235
The error is slightly lower without smearing, so in this case smearing the estimates just doesn't seem to be the wise choice. I've seen examples where Duan's method, and others, are combined in an ensemble model; again, more on ensembles later in this book.
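One way to see why the two sets of metrics are so close is to inspect the smearing factor itself. It's just the mean of the exponentiated residuals, so a value near 1 means smearing rescales every prediction by only a small constant:
> mean(exp(earth_residTest))  # the constant multiplier applied to every prediction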
Let's conclude the analysis by plotting the non-smeared predictions alongside the actual values. I'll show how to do this in ggplot fashion:
> results <- data.frame(exp_pred, test_y)
> colnames(results) <- c('predicted', 'actual')
> ggplot2::ggplot(results, ggplot2::aes(predicted, actual)) +
    ggplot2::geom_point(size = 1) +
    ggplot2::geom_smooth() +  # smoothed fit (loess by default for < 1,000 points)
    ggthemes::theme_fivethirtyeight()
The output of the preceding code is as follows:

[Figure: scatterplot of predicted versus actual sales prices with a smoothed trend line]
This is interesting: you can see what is almost a distinct subset of actual values with higher sales prices than their predicted counterparts. There may be some feature or interaction term we could find to address that difference. We also see considerable variation in the residuals around the $400,000 sale price, primarily, I would argue, because of the paucity of observations in that range.
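If you want to chase down that subset, a simple starting point is to rank the test observations by how far the actual price exceeds the prediction; a hypothetical follow-up using the results data frame we just built:
> results$residual <- results$actual - results$predicted
> head(results[order(-results$residual), ], 10)  # largest under-predictions first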
For starters, we have a pretty good model that serves as an excellent foundation for other modeling efforts, as discussed. Additionally, we produced a model that's rather simple to interpret and explain, which in some cases may be more critical than a rather insignificant reduction in error. Hey, that's why you make the big money. If it were easy, everyone would be doing it.