- Statistics for Machine Learning
- Pratap Dangeti
Maximum likelihood estimation
Logistic regression works on the principle of maximum likelihood estimation; here, we will explain it in detail so that we can cover some more fundamentals of logistic regression in the following sections. Maximum likelihood estimation is a method of estimating the parameters of a model from observations by finding the parameter values that maximize the likelihood of making those observations. In our setting, this means finding the parameter that maximizes the probability p of each observed event (1) and the probability (1-p) of each observed non-event (0), since, as you know:
P(event) + P(non-event) = 1
Example: The sample (0, 1, 0, 0, 1, 0) is drawn from a binomial distribution. What is the maximum likelihood estimate of μ?
Solution: Given that for the binomial distribution P(X=1) = μ and P(X=0) = 1 - μ, where μ is the parameter, the likelihood of the observed sample is:
L(μ) = (1-μ) * μ * (1-μ) * (1-μ) * μ * (1-μ) = μ^2 * (1-μ)^4
Here, log is applied to both sides of the equation for mathematical convenience; also, maximizing the likelihood is the same as maximizing the log of the likelihood:
log(L(μ)) = 2*log(μ) + 4*log(1-μ)
We determine the maximizing value of μ by equating the derivative to zero:
d/dμ log(L(μ)) = 2/μ - 4/(1-μ) = 0  =>  2*(1-μ) = 4*μ  =>  μ = 1/3
However, we need to take the second derivative to determine whether the stationary point obtained by equating the derivative to zero is a maximum or a minimum. If μ is a maximum, the second derivative of log(L(μ)) should be negative:
d²/dμ² log(L(μ)) = -2/μ² - 4/(1-μ)²
Even without substituting the value of μ into the second derivative, we can see that it is negative, as the denominators are squared and both terms carry a negative sign. Nonetheless, substituting the value gives:
At μ = 1/3: -2/(1/3)² - 4/(2/3)² = -18 - 9 = -27 < 0
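As a quick numerical sanity check (a sketch, not from the book), a central finite difference of this sample's log-likelihood reproduces the second derivative at μ = 1/3:

```python
import math

def log_L(mu):
    # Log-likelihood for the sample (0, 1, 0, 0, 1, 0): two 1s and four 0s.
    return 2 * math.log(mu) + 4 * math.log(1 - mu)

# Central finite-difference approximation of the second derivative.
h = 1e-5
mu = 1 / 3
second = (log_L(mu + h) - 2 * log_L(mu) + log_L(mu - h)) / h**2
print(round(second, 2))  # close to -27, confirming a maximum
```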
Hence it has been proven that μ = 1/3 maximizes the likelihood. If we substitute this value into the log-likelihood function, we obtain:
log(L(1/3)) = 2*log(1/3) + 4*log(2/3) ≈ -3.819, so -2*ln(L) ≈ 7.64
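The whole derivation can also be verified numerically. The sketch below (not from the book) grid-searches the sample's log-likelihood for its maximum and reports the corresponding -2*ln(L) value:

```python
import math

# Observed sample from the worked example: two 1s and four 0s.
sample = [0, 1, 0, 0, 1, 0]
ones = sum(sample)             # 2
zeros = len(sample) - ones     # 4

def log_likelihood(mu):
    # log L(mu) = 2*log(mu) + 4*log(1 - mu)
    return ones * math.log(mu) + zeros * math.log(1 - mu)

# Grid search over candidate mu values confirms the analytic maximum.
candidates = [i / 1000 for i in range(1, 1000)]
mu_hat = max(candidates, key=log_likelihood)
print(round(mu_hat, 3))                       # → 0.333
print(round(-2 * log_likelihood(mu_hat), 2))  # → 7.64
```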
The reason for calculating -2*ln(L) is to replicate the metric calculated in full logistic regression. In fact:
AIC = -2*ln(L) + 2*k, where k is the number of estimated parameters
So, logistic regression tries to find the parameters by maximizing the likelihood with respect to the individual parameters. One small difference is that logistic regression uses the Bernoulli distribution rather than the binomial. To be precise, the Bernoulli is just a special case of the binomial with a single trial, as each observation is one draw with only two possible categories.
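To make that connection concrete, here is a minimal sketch (the toy data and variable names are hypothetical, not from the book) that fits a one-feature logistic regression by maximizing the Bernoulli log-likelihood with plain gradient ascent, and reports the -2*ln(L) deviance discussed above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy data: one feature and 0/1 labels.
xs = [0.5, 1.5, 2.0, 3.0, 3.5, 4.5]
ys = [0, 0, 1, 0, 1, 1]

def log_likelihood(b0, b1):
    # Sum of Bernoulli terms y*log(p) + (1-y)*log(1-p),
    # with p = sigmoid(b0 + b1*x).
    total = 0.0
    for x, y in zip(xs, ys):
        p = sigmoid(b0 + b1 * x)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return total

# Simple gradient ascent on the log-likelihood.
b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys))
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys))
    b0 += lr * g0
    b1 += lr * g1

# The deviance, i.e. the -2*ln(L) metric mentioned above.
print(round(-2 * log_likelihood(b0, b1), 3))
```

In practice a library optimizer would replace the hand-rolled loop, but the objective being maximized is exactly the Bernoulli likelihood described in this section.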