
Chapter 3. Multiple Regression in Action

In the previous chapter, we introduced linear regression as a supervised method for machine learning rooted in statistics. Such a method forecasts numeric values using a combination of predictors, which can be continuous numeric values or binary variables, under the assumption that the predictors at hand display a certain relation (a linear one, measurable by correlation) with the target variable. To smoothly introduce many concepts and easily explain how the method works, we limited our example models to just a single predictor variable, leaving it the entire burden of modeling the response.

However, in real-world applications, even though a few very important causes may drive the events you want to model, it is rare that a single variable alone can take the stage and produce a working predictive model. The world is complex (and indeed interrelated in a mix of causes and effects) and it can seldom be explained without considering multiple causes, influences, and hints. Usually, several variables have to work together for a prediction to achieve better and more reliable results.

This intuition decisively affects the complexity of our model, which from this point forward can no longer be easily represented on a two-dimensional plot. With multiple predictors, each of them constitutes a dimension of its own, and we will have to consider that our predictors are not just related to the response but may also be correlated among themselves (sometimes quite strongly), a characteristic of the data called multicollinearity.
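As a concrete illustration of multicollinearity, here is a minimal sketch using NumPy on synthetic data (the variable names and the cutoff values are illustrative assumptions, not taken from the book's examples): two predictors are built to be nearly copies of each other, and the correlation matrix together with the variance inflation factors (the diagonal of the inverse correlation matrix) reveals how strongly they are tied together.

import numpy as np

rng = np.random.default_rng(42)
n = 200

# Two predictors deliberately built to be nearly redundant
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)  # x2 is almost a copy of x1
x3 = rng.normal(size=n)                   # an unrelated predictor
X = np.column_stack([x1, x2, x3])

# Correlation matrix among the predictors: large off-diagonal entries
# are a warning sign of multicollinearity
R = np.corrcoef(X, rowvar=False)
print(np.round(R, 2))

# Variance inflation factors (diagonal of the inverse correlation matrix);
# as a common rule of thumb, values above 5-10 flag problematic collinearity
vif = np.diag(np.linalg.inv(R))
print(np.round(vif, 2))

In a case like this, the coefficient estimates for x1 and x2 become unstable because the model cannot cleanly tell their contributions apart, which is exactly the kind of trouble we will keep an eye on in this chapter.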

Before starting, we'd like to spend a few words on the selection of topics we are going to cover. Although the statistical literature offers a large number of publications and books devoted to regression assumptions and diagnostics, you will find little of that here, because we leave such topics out. We limit ourselves to discussing the problems and aspects that can affect the results of a regression model, from a practical data science standpoint rather than a purely statistical one.

Given these premises, in this chapter we are going to:

  • Extend the procedure from simple regression to multiple regression, keeping an eye on possible sources of trouble such as multicollinearity
  • Understand the importance of each term in your linear model equation
  • Make your variables work together and increase your ability to predict using interactions between variables
  • Leverage polynomial expansions to increase the fit of your linear model with non-linear functions