I am trying to reproduce with optim the results from a simple linear regression fitted with glm or even nls in R.
The parameter estimates are the same, but the residual variance estimate and the standard errors of the other parameters are not, particularly when the sample size is low. I suppose this is due to differences in the way the residual standard error is calculated between the maximum likelihood and least squares approaches (dividing by n or by n-k+1; see below in the example).
I understand from my reading on the web that optimization is not a simple task, but I was wondering whether it is possible to reproduce in a simple way the standard error estimates from glm while using optim.
Simulate a small dataset
set.seed(1)
n = 4 # very small sample size !
b0 <- 5
b1 <- 2
sigma <- 5
x <- runif(n, 1, 100)
y = b0 + b1*x + rnorm(n, 0, sigma)
Estimate with optim
negLL <- function(beta, y, x) {
b0 <- beta[1]
b1 <- beta[2]
sigma <- beta[3]
yhat <- b0 + b1*x
likelihood <- dnorm(y, yhat, sigma)
return(-sum(log(likelihood)))
}
starting.values <- c(b0 = 5, b1 = 2, sigma = 5) # not shown in the original; chosen to mirror the nls() starting values below
res <- optim(starting.values, negLL, y = y, x = x, hessian = TRUE)
estimates <- res$par # Parameters estimates
se <- sqrt(diag(solve(res$hessian))) # Standard errors of the estimates
cbind(estimates,se)
> cbind(estimates,se)
estimates se
b0 9.016513 5.70999880
b1 1.931119 0.09731153
sigma 4.717216 1.66753138
Comparison with glm and nls
> m <- glm(y ~ x)
> summary(m)$coefficients
Estimate Std. Error t value Pr(>|t|)
(Intercept) 9.016113 8.0759837 1.116411 0.380380963
x 1.931130 0.1376334 14.030973 0.005041162
> sqrt(summary(m)$dispersion) # residual standard error
[1] 6.671833
>
> summary(nls( y ~ b0 + b1*x, start=list(b0 = 5, b1= 2)))
Formula: y ~ b0 + b1 * x
Parameters:
Estimate Std. Error t value Pr(>|t|)
b0 9.0161 8.0760 1.116 0.38038
b1 1.9311 0.1376 14.031 0.00504 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 6.672 on 2 degrees of freedom
I can reproduce the different residual standard error estimates like this:
> # optim / Maximum Likelihood estimate
> sqrt(sum(resid(m)^2)/n)
[1] 4.717698
>
> # Least squares estimate (glm and nls estimates)
> k <- 3 # number of parameters
> sqrt(sum(resid(m)^2)/(n-k+1))
[1] 6.671833
Best Answer
The issue is that the standard errors come from
$$\hat\sigma^2 (X^\top X)^{-1}$$
where $\hat\sigma^2$ is the unbiased estimator and not the MLE; see summary.lm.
This is the inverse observed Fisher information for $(\beta_0, \beta_1)$, conditional on $\hat\sigma^2$. The inverse observed Fisher information you compute, however, is for the triplet $(\beta_0, \beta_1, \sigma)$; i.e., it uses the MLE of $\sigma$ and not the unbiased estimator. Thus, I gather the standard errors should differ by a factor of $\sqrt{n/(n-3+1)}$ or something similar. This is indeed the case.
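To check this numerically (a sketch of my own; it assumes the objects n, k, se and m from the question are still in the workspace, and relies on the fact that the cross derivatives between the regression coefficients and $\sigma$ vanish at the MLE, so only a rescaling is needed):
# rescale the optim standard errors of b0 and b1 and compare with glm
se[1:2] * sqrt(n / (n - k + 1))           # rescaled optim standard errors
summary(m)$coefficients[, "Std. Error"]   # glm standard errors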
To elaborate more, as usεr11852 requests, the log-likelihood is
$$l(\vec{\beta},\sigma) = -\frac{n}{2}\log(2\pi) - n\log{\sigma} - \frac{1}{2\sigma^2}(\vec{y}-X\vec\beta)^\top(\vec{y}-X\vec\beta)$$
where $X$ is the design matrix and $n$ is the number of observations. Differentiating once with respect to $\vec\beta$ gives the score $\sigma^{-2}X^\top(\vec{y}-X\vec\beta)$; consequently, the observed information matrix is
$$-\nabla_{\vec{\beta}}\nabla_{\vec{\beta}}^\top l(\vec{\beta},\sigma) = \frac{1}{\sigma^2}X^\top X$$
Now we can plug in either the MLE or the unbiased estimator of $\sigma$, as the following shows.
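Here is a minimal sketch of that calculation (my own illustration; it assumes x, n and the fitted model m from the question are available, and the object names X, sig_mle and sig_unb are mine):
X <- cbind(1, x)                            # design matrix (intercept and x)
XtX_inv <- solve(crossprod(X))              # (X'X)^{-1}
sig_mle <- sqrt(sum(resid(m)^2) / n)        # MLE of sigma: divide by n
sig_unb <- sqrt(sum(resid(m)^2) / (n - 2))  # unbiased estimate: divide by n - 2
sig_mle * sqrt(diag(XtX_inv))               # matches the optim standard errors (up to numerical error)
sig_unb * sqrt(diag(XtX_inv))               # matches the glm and nls standard errors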
We can do the same with a QR decomposition, as lm does internally; a sketch follows at the end. So, to answer whether it is possible to reproduce in a simple way the standard error estimates from glm while using optim: yes, but you need to scale up the standard errors in the Gaussian example you use.
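For completeness, a sketch of the QR route (again my own illustration, reusing X and sig_unb from the sketch above):
qr_X <- qr(X)                       # QR decomposition of the design matrix
XtX_inv_qr <- chol2inv(qr.R(qr_X))  # (R'R)^{-1} = (X'X)^{-1}
sig_unb * sqrt(diag(XtX_inv_qr))    # reproduces the glm / nls standard errors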