Solved – the difference between serial correlation and having a unit root

autocorrelation, time series, unit root

I may be mixing up my time-series and non-time-series concepts, but what is the difference between a regression model that exhibits serial correlation and a model that exhibits a unit root?

In addition, why is it that you can use a Durbin-Watson test to test for serial correlation, but must use a Dickey-Fuller test for unit roots? (My textbook says this is because the Durbin-Watson test cannot be used in models that include lagged values of the dependent variable among the regressors.)

Best Answer

A simple way to see the difference: take an AR(1) process $$y_t = \rho y_{t-1} + \epsilon_t,$$ where $\epsilon_t$ is white noise. Testing for autocorrelation means testing $H_{0,\text{AC}}\colon \rho = 0$, and under that null the process is stationary, so OLS and its usual test statistics behave properly. Testing for a unit root means testing $H_{0,\text{UR}}\colon \rho = 1$, and under that null the process is non-stationary, so the usual OLS t-statistic no longer has its standard distribution. That is why you need the Dickey-Fuller approach: rewrite the regression in differences, $\Delta y_t = (\rho - 1) y_{t-1} + \epsilon_t$, and compare the t-statistic on $y_{t-1}$ against the non-standard Dickey-Fuller critical values.
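
To make this concrete, here is a minimal Python sketch (assuming `numpy` and `statsmodels` are installed) that simulates the AR(1) above for $\rho = 0.5$ and $\rho = 1$, then computes a Durbin-Watson statistic and an augmented Dickey-Fuller test on each series. The helper `simulate_ar1` and the seed are just for this illustration, and applying Durbin-Watson directly to the raw series (rather than to regression residuals, its usual use) is only meant to show the serial-correlation pattern.

```python
# Sketch: serial correlation vs. unit root on simulated AR(1) data.
import numpy as np
from statsmodels.stats.stattools import durbin_watson
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)

def simulate_ar1(rho, n=500):
    """Simulate y_t = rho * y_{t-1} + eps_t with standard-normal noise."""
    y = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + eps[t]
    return y

y_stationary = simulate_ar1(rho=0.5)  # serially correlated but stationary
y_unit_root = simulate_ar1(rho=1.0)   # random walk: unit root

# Durbin-Watson statistic (here on the series itself, for illustration):
# values near 2 suggest no first-order autocorrelation, values near 0
# suggest strong positive autocorrelation.
print("DW, rho=0.5:", durbin_watson(y_stationary))
print("DW, rho=1.0:", durbin_watson(y_unit_root))

# Augmented Dickey-Fuller test: H0 is a unit root (rho = 1);
# a small p-value rejects the unit root.
adf_stat, pvalue, *_ = adfuller(y_stationary)
print("ADF on stationary AR(1): stat=%.2f, p=%.3f" % (adf_stat, pvalue))
adf_stat, pvalue, *_ = adfuller(y_unit_root)
print("ADF on random walk:      stat=%.2f, p=%.3f" % (adf_stat, pvalue))
```

With these settings you would typically see the ADF test reject the unit root for the $\rho = 0.5$ series but not for the random walk, while the Durbin-Watson statistic flags positive serial correlation in both, which is exactly the distinction between the two null hypotheses above.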