Solved – Using HAC standard errors although there might be no autocorrelation

Tags: least squares, robust, robust-standard-error, standard error, time series

I'm running a couple of regressions and, as I wanted to be on the safe side, decided to use HAC (heteroskedasticity and autocorrelation consistent) standard errors throughout. There may be a few cases where serial correlation is not present. Is this still a valid approach? Are there any drawbacks?

Best Answer

Loosely, when estimating standard errors:

  • If you assume something is true and it isn't, you generally lose consistency. (This is bad: as the number of observations rises, your estimate need not converge in probability to the true value.) For example, when you assume observations are independent and they aren't, you can massively understate standard errors (see the sketch after this list).
  • If you don't assume something is true and it is, you generally lose some efficiency (i.e. your estimator is noisier than necessary). This often isn't a huge deal, and defending your work in a seminar tends to be easier if you've been on the conservative side in your assumptions.
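
As a rough illustration of the first point, here is a minimal simulation sketch in Python (numpy and statsmodels are assumptions of mine, not part of the original answer, as are the AR(1) coefficient of 0.8 and the Newey-West lag length maxlags=8). It fits the same regression twice, once with classical standard errors and once with HAC (Newey-West) standard errors, on data where both the regressor and the errors are serially correlated:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, rho = 500, 0.8

    # Both the regressor and the error follow AR(1) processes, so the products
    # x_t * u_t are serially correlated and IID-based standard errors understate
    # the true sampling variability of the OLS slope.
    x = np.zeros(n)
    u = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
        u[t] = rho * u[t - 1] + rng.normal()
    y = 1.0 + 2.0 * x + u

    X = sm.add_constant(x)
    classical = sm.OLS(y, X).fit()                                   # classical (IID) SEs
    hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 8})  # Newey-West HAC SEs

    print("classical SE on slope:", classical.bse[1])
    print("HAC SE on slope:      ", hac.bse[1])

In runs of this kind the HAC standard error on the slope tends to come out noticeably larger than the classical one, which is exactly the understatement the first bullet warns about.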

If you have enough data, you should be entirely safe since the estimator is consistent!

As Wooldridge points out in his book Introductory Econometrics (p. 247, 6th edition), though, a big drawback can come from small-sample issues: you may effectively be dropping one assumption (no serial correlation of errors) but adding another, namely that you have enough data for the Central Limit Theorem to kick in, since HAC estimators rely on asymptotic arguments.

If you have too little data to rely on asymptotic results:

  • The "t-stats" you compute may not follow the t-distribution for small samples. Consequently, the p-values may be quite wrong.
  • But if the errors truly are normal, homoskedastic, IID errors then the t-stats you compute, under the classic small sample assumptions, will follow the t-distribution precisely.
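
To make this concrete, here is a rough Monte Carlo sketch under the classical assumptions (again Python with numpy, scipy, and statsmodels; the sample size n = 20, the lag length maxlags = 4, and the replication count are arbitrary illustration choices of mine). With IID normal, homoskedastic errors, the classical t-test should reject at close to its nominal 5% level, while the HAC-based test tends to over-reject in a sample this small:

    import numpy as np
    from scipy import stats
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n, reps = 20, 5000
    crit = stats.t.ppf(0.975, df=n - 2)  # exact 5% two-sided critical value under classical assumptions

    reject_classical = 0
    reject_hac = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = 1.0 + rng.normal(size=n)  # true slope on x is zero; errors are IID normal and homoskedastic
        X = sm.add_constant(x)
        classical = sm.OLS(y, X).fit()
        hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
        reject_classical += abs(classical.tvalues[1]) > crit
        reject_hac += abs(hac.tvalues[1]) > crit

    print("classical rejection rate:", reject_classical / reps)  # should be close to 0.05
    print("HAC rejection rate:      ", reject_hac / reps)        # typically above 0.05 at n = 20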

See this answer to a related question: https://stats.stackexchange.com/a/5626/97925
