Python – Choosing the Best Covariance Correction for Standard Errors: Hansen-Hodrick or Newey-West

Tags: covariance, python, robust-standard-errors, sandwich, statsmodels

I am wondering which type of covariance correction for standard errors is preferable: Hansen-Hodrick or Newey-West?

Also, does anyone know whether the statsmodels "HAC" robust covariance option uses the Hansen-Hodrick or the Newey-West correction for standard errors?

statsmodels.regression.linear_model.RegressionResults.get_robustcov_results

Best Answer

By default, statsmodels uses Newey-West-corrected standard errors with the usual Bartlett window.

There is a 'weights_func' (also exposed as 'kernel') option to choose a window other than Bartlett, e.g. uniform.

However, statsmodels has no further options for HAC robust standard errors, such as pre-whitening or automatic lag selection, nor autocorrelation-robust standard errors without heteroscedasticity robustness (i.e. only 'HAC', but no 'AC').

If I remember correctly, Hansen-Hodrick is 'AC' with a uniform kernel.