Solved – Unit Root testing and stationarity of a time series

augmented-dickey-fuller, kpss-test, time-series, unit-root

I'm trying to understand:

  1. How is checking for stationarity (or the lack thereof) linked to unit root testing? I am mostly after the logic of it: I understand the null hypotheses used in the ADF and KPSS tests, but I need the reasoning behind them.

  2. Why do we include a constant and a trend term in some instances? I somewhat understand including the trend: if there is a trend in the series, we wish to see whether the series is stationary around that trend. Is that correct? And why include a constant?

  3. Why must we evaluate autocorrelation between independent variables when using regression on time series data? The OLS assumption is that the errors are independently distributed, so checking the residuals for autocorrelation I can understand, but why check for autocorrelation among the independent variables?

Best Answer

Consider the process $$x_t=c+\varphi x_{t-1}+\varepsilon_t,$$ with the usual assumption $E[\varepsilon_t]=0$.

Assuming the process is stationary, take expectations: $$E[x_t]=c+\varphi E[x_{t-1}].$$ Under stationarity the mean is constant, so $E[x_t]=E[x_{t-1}]$ and hence $$E[x_t]=\frac{c}{1-\varphi}.$$

This works only when $|\varphi|<1$. If $\varphi=1$ the formula breaks down (the denominator is zero): the process becomes a random walk, whose variance grows with $t$ (and whose mean drifts when $c\ne 0$), so there is no constant mean to settle around, i.e. the process is not stationary.
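To see the difference concretely, here is a minimal simulation sketch (NumPy only; the sample size, seed and parameter values are arbitrary choices, not anything prescribed by the tests): an AR(1) with $|\varphi|<1$ fluctuates around $c/(1-\varphi)$, while the $\varphi=1$ case wanders without a fixed mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, c=0.0, n=2000):
    """Simulate x_t = c + phi * x_{t-1} + eps_t with standard-normal noise."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = c + phi * x[t - 1] + eps[t]
    return x

stationary = simulate_ar1(phi=0.5, c=2.0)  # |phi| < 1: mean settles near c/(1-phi) = 4
unit_root = simulate_ar1(phi=1.0, c=0.0)   # phi = 1: a random walk, variance grows with t

# The stationary series hovers around 4; the random walk has no fixed level.
print("AR(1), phi=0.5: mean of 2nd half =", stationary[1000:].mean().round(2))
print("Random walk:    mean of 2nd half =", unit_root[1000:].mean().round(2))
```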

That's what a unit root test does: it takes the differences $\Delta x_t$ and regresses them on the lagged level $x_{t-1}$: $$\Delta x_t=c+(\varphi-1)x_{t-1}+\varepsilon_t.$$ Then it tests whether $\varphi-1=0$. If it is zero, we have a unit root, i.e. a non-stationary process.
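For intuition only, here is a rough sketch of that regression done by hand with statsmodels OLS (it assumes the `unit_root` array from the snippet above, but any 1-D series would do). In practice you would call a packaged ADF test instead, which also adds lagged differences and uses the proper critical values.

```python
import numpy as np
import statsmodels.api as sm

# Dickey-Fuller-style regression: Delta x_t on a constant and the lagged level x_{t-1}.
x = unit_root
dx = np.diff(x)                   # Delta x_t
lagged = sm.add_constant(x[:-1])  # constant c plus lagged level x_{t-1}

ols = sm.OLS(dx, lagged).fit()
rho_hat = ols.params[1]   # estimate of (phi - 1); near zero suggests a unit root
t_stat = ols.tvalues[1]   # NOTE: under the unit-root null this statistic does NOT follow
                          # a standard t distribution; it must be compared against
                          # Dickey-Fuller critical values, which is what adfuller() does.
print(rho_hat, t_stat)
```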

The constant refers to the term $c$.

The trend is easy too: $$x_t=c+\alpha t + \varphi x_{t-1}+\varepsilon_t$$

In this case the mean is no longer constant; treating it as roughly the same in adjacent periods gives approximately $$E[x_t]\approx\frac{c+\alpha t}{1-\varphi},$$ i.e. the mean follows a line in $t$. So if you have a trend, the process is non-stationary, but if you account for the time trend, it is still stationary around that line (trend-stationary).
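If you happen to use statsmodels (just one possible tool; the `stationary` series from the earlier sketch stands in for your data), the constant/trend choice shows up directly as the `regression` argument of both tests:

```python
from statsmodels.tsa.stattools import adfuller, kpss

# ADF: 'c' = constant only, 'ct' = constant + linear trend
# (other options exist; see the statsmodels docs).
adf_stat, adf_p, *_ = adfuller(stationary, regression="c")

# KPSS flips the hypotheses (null = stationarity): 'c' tests level stationarity,
# 'ct' tests stationarity around a deterministic trend.
kpss_stat, kpss_p, *_ = kpss(stationary, regression="c", nlags="auto")

print(f"ADF  p-value: {adf_p:.3f}  (small p => reject the unit-root null)")
print(f"KPSS p-value: {kpss_p:.3f}  (small p => reject the stationarity null)")
```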
