
Parameter estimates

In this section, we are going to discuss some of the algorithms used for parameter estimation.

Maximum likelihood estimation

Maximum likelihood estimation (MLE) is a method for estimating model parameters by choosing the values that maximize the likelihood of the observed data.

Now let us try to find the parameter estimates of the probability density function of a normal distribution.

Let us first generate a series of random variables, which can be done by executing the following code:

> set.seed(100) 
> NO_values <- 100 
> Y <- rnorm(NO_values, mean = 5, sd = 1) 
> mean(Y) 

This gives 5.002913.

> sd(Y) 

This gives 1.02071.

Now let us write a function for the negative log-likelihood, which is the quantity that mle minimizes:

> LogL <- function(mu, sigma) { 
+     A <- dnorm(Y, mu, sigma) 
+     -sum(log(A)) 
+ } 
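For the normal distribution, the MLE actually has a closed form: the sample mean, and the root mean squared deviation with divisor n rather than n - 1. As a sketch for comparison with the numeric optimizer, we can compute these directly on the same data:

```r
# Reproduce the data generated earlier
set.seed(100)
Y <- rnorm(100, mean = 5, sd = 1)

n <- length(Y)
mu_hat <- mean(Y)                            # MLE of the mean
sigma_hat <- sqrt(sum((Y - mu_hat)^2) / n)   # MLE of sd (divisor n, not n - 1)

mu_hat      # about 5.0029, as reported above
sigma_hat   # slightly below sd(Y), which divides by n - 1
```

These closed-form values are what the numeric optimization in the next step should converge to.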

Now let us apply the function mle from the stats4 package to estimate the mean and standard deviation:

> library(stats4) 
> mle(LogL, start = list(mu = 2, sigma = 2)) 

Here mu and sigma have been given initial starting values of 2.

This gives the output as follows:

Figure 2.13: Output for MLE estimation

NaNs are produced because the unconstrained optimizer attempts negative values for the standard deviation.
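The warning can be reproduced directly: dnorm returns NaN when handed a negative standard deviation, which is exactly what the optimizer tries during an unconstrained search:

```r
# A negative sd is invalid for the normal density,
# so dnorm returns NaN (with a "NaNs produced" warning)
dnorm(0, mean = 0, sd = -1)
```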

This can be avoided by constraining the parameters, as shown here. Bounding sigma below by zero removes the warning messages seen in Figure 2.13:

> mle(LogL, start = list(mu = 2, sigma = 2), method = "L-BFGS-B", 
+     lower = c(-Inf, 0), 
+     upper = c(Inf, Inf)) 

This, upon execution, gives the best possible fit, as shown here:

Figure 2.14: Revised output for MLE estimation
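As a check, the constrained fit can be reproduced end to end and its coefficients extracted with coef(); the estimated mu should agree with the sample mean:

```r
library(stats4)

# Reproduce the data and the negative log-likelihood from above
set.seed(100)
Y <- rnorm(100, mean = 5, sd = 1)
LogL <- function(mu, sigma) -sum(log(dnorm(Y, mu, sigma)))

fit <- mle(LogL, start = list(mu = 2, sigma = 2), method = "L-BFGS-B",
           lower = c(-Inf, 0), upper = c(Inf, Inf))
coef(fit)   # estimates close to mean(Y) and the divisor-n MLE of sd
```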

Linear model

In the linear regression model, we try to predict the dependent/response variable in terms of the independent/predictor variables. In a linear model, we fit the best possible straight line, known as the regression line, through the given points. The coefficients of the regression line are estimated using statistical software. The intercept of the regression line represents the mean value of the dependent variable when the predictor variable is zero, and the response variable changes by the estimated coefficient for each unit change in the predictor variable. Now let us estimate the parameters of a linear regression model where the dependent variable is Adj.Close and the independent variable is Volume of Sampledata. We can fit the linear model as follows:

> Y<-Sampledata$Adj.Close 
> X<-Sampledata$Volume 
> fit <- lm(Y ~ X) 
> summary(fit) 

Upon executing the preceding code, the output is generated as given here:

Figure 2.15: Output for linear model estimation
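For simple linear regression, the estimates also have a closed form: the slope is the covariance of X and Y divided by the variance of X, and the intercept follows from the sample means. A sketch on simulated data (Sampledata is assumed unavailable here) confirms that the formulas match lm:

```r
# Simulated data standing in for Sampledata
set.seed(1)
X <- rnorm(50)
Y <- 2 + 3 * X + rnorm(50)

# Least-squares formulas for slope and intercept
slope <- cov(X, Y) / var(X)
intercept <- mean(Y) - slope * mean(X)

fit <- lm(Y ~ X)
c(intercept, slope)   # matches coef(fit)
```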

The summary display shows the parameter estimates of the linear regression model. Similarly, we can estimate parameters for other regression models, such as multiple regression.
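As an illustration of the multiple-regression case mentioned above, the same lm interface accepts several predictors; here is a sketch on simulated data (the predictor names X1 and X2 are hypothetical):

```r
# Simulated data with two predictors
set.seed(2)
X1 <- rnorm(100)
X2 <- rnorm(100)
Y <- 1 + 2 * X1 - 0.5 * X2 + rnorm(100)

# Multiple linear regression: one coefficient per predictor plus an intercept
mfit <- lm(Y ~ X1 + X2)
summary(mfit)
```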
