Solved – Bootstrap estimate for standard error in linear regression

bootstrap | regression | self-study

I came across the bootstrap concept in the book An Introduction to Statistical Learning, where the standard error for the linear regression coefficients is estimated both by the formula and by the bootstrap process. The comparison is followed by this paragraph (page 196):

Now although the formula for the standard errors do not rely on the linear model being correct, the estimate for $\sigma^2$ does. We see […] that there is a non-linear relationship in the data, and so the residuals from a linear fit will be inflated, and so will $\hat\sigma^2$. […] The bootstrap approach does not rely on any of these assumptions, and so it is likely giving a more accurate estimate of the standard errors of $\hat\beta_0$ and $\hat\beta_1$ than is the summary() function. [Emphasis added.]

So my question is: why do these assumptions reduce the reliability or accuracy of the estimates? Since a larger residual error means a larger standard error of the estimates (indirectly, through an inflated $\hat\sigma^2$), this assumption doesn't seem wrong to me.

Best Answer

Sure, a worse fit gives larger residuals, which gives a larger standard error estimate. But this is not why we compute the standard error estimate. If all you care about is fit, you don't need to bother with the standard error of $\hat \beta$ at all. Just use $\hat\sigma^2$ directly, which is the MSE of your model. We use $\text{s.e.}(\hat \beta)$ for inference: to say something about how well we have estimated $\beta$. In short, to construct confidence intervals around our point estimate.
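As an aside, $\hat\sigma^2$ is easy to pull out of a fitted model in R. A minimal sketch on made-up data (the data here are hypothetical, just for illustration):

```r
set.seed(1)
x <- rnorm(50)
y <- 1 + 2*x + rnorm(50)
fit <- lm(y ~ x)

# summary()$sigma is the residual standard error, so squaring it
# gives the error variance estimate: RSS / (n - 2) in simple regression
sigma2_hat <- summary(fit)$sigma^2
sigma2_hat
```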

If we want to trust these intervals and their relation to the population of $\beta$s, $\text{s.e.}(\hat\beta)$ should be correct. It should be the standard deviation of whatever distribution $\hat \beta$ has. If the model is specified correctly, we can use the standard formulas to estimate $\text{s.e.}(\hat\beta)$ directly, and we know exactly which properties $\hat\beta$ has. These properties and formulas — and hence our inferences about $\hat\beta$ — are directly derived from the model assumptions. If the model isn't correctly specified, all our beautiful and clean theory goes out the window.
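Concretely, for simple linear regression the standard formula in question (ISLR's eq. 3.8, with $\sigma^2$ replaced by its estimate in practice) is

$$\text{s.e.}(\hat\beta_1)^2 = \frac{\hat\sigma^2}{\sum_{i=1}^n (x_i - \bar{x})^2},$$

so an inflated $\hat\sigma^2$ is passed straight through to the standard error estimate.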

Elsewhere in the ISLR book you can read that an approximate 95% confidence interval for $\hat \beta$ is

$$[\hat\beta - 2\cdot\text{s.e.}(\hat\beta), \hat\beta + 2\cdot\text{s.e.}(\hat\beta)].$$

This is true if you have a good standard error estimate, but if the estimate is too small/large, the confidence interval is too tight/wide. You can no longer have 95% confidence in your 95% confidence interval.
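To make the interval concrete, here is a minimal sketch in R, again on made-up data (the data and variable names are hypothetical, not from ISLR):

```r
set.seed(2)
x <- rnorm(50)
y <- 1 + 2*x + rnorm(50)
fit <- lm(y ~ x)

est <- coef(summary(fit))["x", "Estimate"]
se  <- coef(summary(fit))["x", "Std. Error"]

# approximate 95% CI: point estimate plus/minus two standard errors
ci <- c(lower = est - 2*se, upper = est + 2*se)
ci
```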

A simulation study

Below is an illustration in code and figures. The examples in ISLR use some data that look quadratic. I will simulate some data so that we know what the truth is. My true model is $y = 4 + 5x - 3x^2 + \epsilon$, where $\epsilon \sim N(0,1)$. This is classic linear regression; the big assumption is that, conditional on $x$, the errors are iid normal with mean zero. A quadratic model is the perfect fit. A linear model is a poor fit.

The figure below shows some simulated data in grey, the true model in black, and a fitted linear regression in red. The errors are not mean-zero normal conditional on $x$: they are consistently below zero to the right and to the left, and consistently above zero in the middle.

library(boot)
library(plyr)

set.seed(22042017) # for reproducibility
generate_data <- function(nsamples=100) {
  x <- rnorm(nsamples, mean=.75, sd=.5)
  y <- 4 + 5*x - 3*x^2 + rnorm(length(x), mean=0, sd=1)
  data.frame(x, y)
}


dat <- generate_data()
plot(dat, pch=20, col="grey")
curve(4 + 5*x - 3*x^2, add=TRUE, lwd=1.5)
abline(lm(y~x, data=dat), col="red", lwd=1.5)

To explore the behavior of a bootstrap estimate of s.e. as compared to the standard parametric estimate, I have included a small simulation. First I generate a data set similar to the above; I then estimate the standard error for the coefficient of $x$ both with the parametric assumptions in lm() and with the bootstrap. This gives us i) a distribution over how the standard estimate behaves here and ii) a distribution over how the bootstrap estimate behaves. I also calculate the "true" s.e. of $\hat \beta_x$ by repeatedly generating data and calculating the beta; the standard deviation of all these betas then serves as the truth.

# for the bootstrap: refit the model on the resampled rows
bootfun <- function(data, index) {
  coef(lm(y~x, data=data, subset = index))
}

# simulation: compare 
experiment <- plyr::raply(1000, {
  dat <- generate_data()

  sig_lm <- coef(summary(lm(y~x, data=dat)))[2, 2]
  sig_boot <- sd(boot(dat, bootfun, 100)$t[, 2])

  c(lm=sig_lm, boot=sig_boot)
})


# The "real" s.e. of beta estimates
sampling <- raply(10000, {
  dat <- generate_data()
  coef(lm(y~x, data=dat))[2]
})
truth <- sd(sampling)


d_lm <- density(experiment[, 1])
d_bt <- density(experiment[, 2])

xl <- range(d_lm$x, d_bt$x)
yl <- range(d_lm$y, d_bt$y)

plot(d_lm, xlim=xl, ylim=yl)
lines(d_bt, lty=2)
abline(v=truth, col="red")

Created on 2019-10-25 by the reprex package (v0.2.1)

The standard estimate is the solid line, the bootstrap is the dashed line, and in red we see the truth. The standard estimate greatly underestimates the truth, while the bootstrap only somewhat underestimates it. The bootstrap gets much closer on average and at least sometimes lands in the right area.
