Solved – Breusch–Godfrey test under heteroskedasticity

Tags: autocorrelation, f-test, heteroscedasticity, hypothesis testing, vector-autoregression

Do I need to account for heteroskedasticity when performing the (vector) AR1-2 test?

The Autocorrelation (AR) 1-2 test, often referred to as the Breusch–Godfrey test, is defined as follows:

The test is performed through the auxiliary regression of the
residuals on the original variables and lagged residuals (missing
lagged residuals at the start of the sample are replaced by zero, so
no observations are lost). Unrestricted variables are included in the
auxiliary regression. The null hypothesis is no autocorrelation, which
would be rejected if the test statistic is too high. This LM test is
valid for systems with lagged dependent variables and diagonal
residual autocorrelation, whereas neither the Durbin–Watson nor the
residual autocorrelations provide a valid test in that case.
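
In equations (this is the standard single-equation textbook formulation, not a quote from the OxMetrics manual), the AR 1-2 auxiliary regression for the residuals $\hat{u}_t$ of a model with regressors $x_t$ is

$$\hat{u}_t = x_t'\gamma + \rho_1 \hat{u}_{t-1} + \rho_2 \hat{u}_{t-2} + v_t, \qquad LM = T\,R^2 \overset{a}{\sim} \chi^2(2),$$

where $R^2$ comes from this auxiliary regression and $T$ is the sample size; OxMetrics reports an F-form of the same statistic.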

I have a VAR model and I'm trying to determine the number of lags to include.
My model suffers from heteroskedasticity, so I'm using a Wald test to take that into account when doing inference. There is a large difference between the ordinary standard errors and the heteroskedasticity-consistent standard errors in my model.

I'm using OxMetrics and it returns the same AR 1-2 test statistic whether I estimate the model with ordinary standard errors or with heteroskedasticity-consistent standard errors. Is this because the test on the auxiliary regression is not affected by the heteroskedasticity in the main model, or is it just because OxMetrics doesn't perform the right test in this case?

Best Answer

The Breusch–Godfrey test does not rely on the estimated standard errors, so it does not matter whether or not you use heteroskedasticity-robust standard errors in your regressions.

Very short description of the BG test to check for AR(1) autocorrelation:

  1. Carry out the OLS regression and compute the residuals.
  2. Regress the residuals on the independent variables of your model and on the lagged residuals.
  3. Compute the test statistic by multiplying the R-squared of the second regression by your sample size.
  4. Compare the test statistic against a chi-squared distribution with degrees of freedom equal to the number of lagged residuals in the auxiliary regression (one here).

As you can see, none of the steps above depend on how you estimate the standard errors, either in your "main" regression or in the "auxiliary" BG regression.
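
As a concrete illustration, here is a minimal NumPy/SciPy sketch of the single-equation procedure above (the function name and data layout are my own, not OxMetrics output); note that no standard error appears anywhere in the computation:

```python
import numpy as np
from scipy.stats import chi2

def breusch_godfrey_lm(y, X, p=1):
    """Minimal single-equation Breusch-Godfrey LM test for residual
    autocorrelation up to order p. y is (n,), X is (n, k) including a constant."""
    n = len(y)

    # Step 1: main OLS regression and its residuals
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta

    # Step 2: auxiliary regression of the residuals on X and p lagged residuals
    # (missing initial lags are set to zero, so no observations are lost)
    lags = np.column_stack(
        [np.concatenate((np.zeros(j), u[:-j])) for j in range(1, p + 1)]
    )
    Z = np.hstack((X, lags))
    gamma, *_ = np.linalg.lstsq(Z, u, rcond=None)
    e = u - Z @ gamma

    # Step 3: LM statistic = n * R^2 of the auxiliary regression
    r2 = 1.0 - (e @ e) / ((u - u.mean()) @ (u - u.mean()))
    lm = n * r2

    # Step 4: compare against a chi-squared distribution with p degrees of freedom
    return lm, chi2.sf(lm, df=p)
```

Nothing in this computation uses a standard error, which is why OxMetrics reports the same AR 1-2 statistic regardless of how you choose to compute the standard errors of the main model.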

For further information, see here for a step-by-step explanation of the BG test. I recall that you can even download the data mentioned in the PDF somewhere on the site if you want to replicate the procedure.
