
Logistic regression

As previously discussed, our classification problem is best modeled with probabilities that are bounded by 0 and 1. We could do this for all of our observations with a number of different functions, but here we'll focus on the logistic function. The logistic function used in logistic regression is as follows:

Probability(Y) = e^(B0 + B1x) / (1 + e^(B0 + B1x))
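As a quick illustration, here is a minimal sketch of the logistic function in R; the values of B0 and B1 are arbitrary choices for demonstration, not estimates from any data in this chapter. Whatever the linear predictor B0 + B1x is, the output is always squeezed between 0 and 1:

# logistic function: maps the linear predictor B0 + B1*x to a probability
logistic <- function(x, B0 = 0, B1 = 1) {
  exp(B0 + B1 * x) / (1 + exp(B0 + B1 * x))
}

logistic(-5)  # close to 0
logistic(0)   # exactly 0.5
logistic(5)   # close to 1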

If you've ever placed a friendly wager on horse races or the World Cup, you may understand the concept better as odds. The logistic function can be turned into odds with the formulation Probability(Y) / (1 - Probability(Y)). For instance, if the probability of Brazil winning the World Cup is 20 percent, then the odds are 0.2 / (1 - 0.2), which is equal to 0.25, translating to odds of one to four.
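The probability-to-odds conversion is easy to check in R; this short sketch just reproduces the Brazil arithmetic above:

# convert a probability to odds: p / (1 - p)
prob_to_odds <- function(p) p / (1 - p)

prob_to_odds(0.20)  # 0.25, that is, odds of one to four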

To translate the odds back to probability, take the odds and divide by one plus the odds. The World Cup example is hence 0.25 / (1 + 0.25), which is equal to 20 percent. Additionally, let's consider the odds ratio. Assume that the odds of Germany winning the Cup are 0.18. We can compare the odds of Brazil and Germany with the odds ratio. In this example, the odds ratio would be the odds of Brazil divided by the odds of Germany. We'll end up with an odds ratio equal to 0.25 / 0.18, which is equal to 1.39. Here, we'd say that Brazil's odds of winning the World Cup are 1.39 times greater than Germany's.
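The reverse conversion and the odds ratio can be sketched in R the same way, using the World Cup numbers from the example above:

# convert odds back to a probability: odds / (1 + odds)
odds_to_prob <- function(odds) odds / (1 + odds)
odds_to_prob(0.25)   # 0.2, back to the original 20 percent

# odds ratio comparing Brazil's odds to Germany's odds
odds_brazil  <- 0.25
odds_germany <- 0.18
odds_brazil / odds_germany   # roughly 1.39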

One way to look at the relationship of logistic regression to linear regression is to express logistic regression in terms of the log odds, that is, log(P(Y) / (1 - P(Y))) = B0 + B1x. The coefficients are estimated using maximum likelihood instead of OLS. The intuition behind maximum likelihood is that we're calculating the estimates for B0 and B1 that will create a predicted probability for an observation that's as close as possible to the actual observed outcome of Y, a so-called likelihood. The R language does what other software packages do for maximum likelihood, which is to find the optimal combination of beta values that maximizes the likelihood.
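In R, this maximum likelihood fit is what glm() performs when you specify the binomial family. The sketch below assumes a hypothetical data frame df with a binary outcome y and a single predictor x; these names are placeholders for illustration, not the data used later in this chapter:

# fit a logistic regression by maximum likelihood with glm()
fit <- glm(y ~ x, data = df, family = binomial)

summary(fit)                      # coefficients are on the log-odds scale
exp(coef(fit))                    # exponentiate to read them as odds ratios
predict(fit, type = "response")   # predicted probabilities for each observation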

With these facts in mind, logistic regression is a powerful technique for predicting classification problems and is often the starting point for model creation in such problems. Therefore, in this chapter, we'll attack the upcoming problem with logistic regression first.
