Time Series – Example of a Series Without a Unit Root That is Non-Stationary

augmented-dickey-fuller, stationarity, time-series, unit-root

Several times I have seen people reject the null in an augmented Dickey-Fuller test and then claim that this shows their series is stationary (unfortunately, I cannot cite the sources of these claims, but I imagine similar claims appear here and there in one journal or another).

I contend that this is a misunderstanding: rejection of the null of a unit root is not necessarily the same thing as having a stationary series, especially since alternative forms of non-stationarity are rarely investigated or even considered when such tests are done.

What I seek is either:

a) a nice clear counterexample to the claim (I can imagine a couple right now but I bet someone other than me will have something better than what I have in mind). It could be a description of a specific situation, perhaps with data (simulated or real; both have their advantages); or

b) a convincing argument why rejection in an augmented Dickey-Fuller test should be seen as establishing stationarity

(or even both (a) and (b) if you're feeling clever)

Best Answer

Here is an example of a non-stationary series that not even a white noise test can detect (let alone a Dickey-Fuller type test):

[Figure: a simulated series that looks like white noise]

Yes, this might be surprising, but this is not white noise.

Most non-stationary counterexamples are based on a violation of the first two conditions of stationarity: deterministic trends (non-constant mean) or unit-root / heteroskedastic time series (non-constant variance). However, you can also have non-stationary processes with constant mean and variance that violate the third condition: the autocovariance function (ACVF) $\mathrm{cov}(x_s, x_t)$ must be constant over time, i.e. a function of the lag $|s-t|$ only.
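Spelled out, the three conditions of (weak) stationarity referred to here are

$$
\text{(i) } E(x_t) = \mu \text{ for all } t, \qquad
\text{(ii) } \mathrm{var}(x_t) = \sigma^2 < \infty \text{ for all } t, \qquad
\text{(iii) } \mathrm{cov}(x_s, x_t) = \gamma(|s-t|) \text{ for all } s, t.
$$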

The time series above is an example of such a series: it has zero mean and unit variance, but its ACVF depends on time. More precisely, it is a locally stationary MA(1) process with parameters chosen such that it becomes spurious white noise (see the references below): the parameter of the MA process $x_t = \varepsilon_t + \theta_1 \varepsilon_{t-1}$ changes over time,

$$\theta_1(u) = 0.5 - 1 \cdot u,$$

where $u = t/T$ is normalized time. The reason this looks like white noise (even though by definition it clearly isn't) is that the time-varying ACVF integrates out to zero over time. Since the sample ACVF converges to the average ACVF, the sample autocovariance (and autocorrelation, ACF) converges to a function that looks like that of white noise. So even a Ljung-Box test won't be able to detect this non-stationarity. The paper (disclaimer: I am the author) on Testing for white noise against locally stationary alternatives proposes an extension of Box tests to deal with such locally stationary processes.
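To make the "integrates out to zero" point concrete: for this MA(1), the lag-1 autocovariance at rescaled time $u$ is $\gamma(1, u) = \theta_1(u)\,\sigma_\varepsilon^2$, and its time average is

$$\sigma_\varepsilon^2 \int_0^1 \theta_1(u)\, du = \sigma_\varepsilon^2 \int_0^1 (0.5 - u)\, du = 0.$$

A minimal simulation sketch of this process (the sample size, the seed, and the standard normal innovations are arbitrary choices):

set.seed(1)
n.obs <- 1000
u     <- (1:n.obs) / n.obs                    # normalized time u = t/T
theta <- 0.5 - u                              # time-varying MA(1) coefficient
eps   <- rnorm(n.obs + 1)                     # innovations eps_0, ..., eps_T
x     <- eps[-1] + theta * eps[-(n.obs + 1)]  # x_t = eps_t + theta_1(t/T) * eps_{t-1}

layout(matrix(1:2, ncol = 2))
plot(ts(x))
acf(x)                                        # sample ACF should look like white noise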

For more R code and more details see also this blog post.

Update after mpiktas's comment:

It is true that this might look like just a theoretically interesting case that is not seen in practice. I agree it is unlikely that you will see such spurious white noise directly in a real-world dataset, but you will see it in the residuals of almost any stationary model fit. Without going into too much theoretical detail, just imagine a general time-varying model $\theta(u)$ with a time-varying covariance function $\gamma_{\theta}(k, u)$. If you fit a constant model $\widehat{\theta}$, this estimate will be close to the time average of the true model $\theta(u)$; naturally, the residuals will then be close to $\theta(u) - \widehat{\theta}$, which by construction of $\widehat{\theta}$ integrates out to zero (approximately). See Goerg (2012) for details.
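Here is a rough simulated illustration of that argument (the coefficient path $\theta_1(u) = 0.2 + 0.6u$, the sample size, and the seed are arbitrary choices): fit a constant MA(1) to a time-varying MA(1) and compare the Ljung-Box test on all residuals with the same test applied to each half of the sample.

set.seed(2)
n.obs <- 1000
u     <- (1:n.obs) / n.obs
theta <- 0.2 + 0.6 * u                        # time-varying MA(1); time average is 0.5
eps   <- rnorm(n.obs + 1)
y     <- eps[-1] + theta * eps[-(n.obs + 1)]

fit <- arima(y, order = c(0, 0, 1), include.mean = FALSE)  # constant MA(1) fit
fit$coef                                      # should be close to the time average 0.5
res <- residuals(fit)

Box.test(res, lag = 1, type = "Ljung-Box")                          # full sample
Box.test(res[1:(n.obs / 2)], lag = 1, type = "Ljung-Box")           # first half
Box.test(res[(n.obs / 2 + 1):n.obs], lag = 1, type = "Ljung-Box")   # second half

The full-sample test typically does not reject, while the half-sample tests tend to pick up the leftover (opposite-signed) correlation, precisely because the deviation $\theta_1(u) - \widehat{\theta}$ averages out to zero over the whole sample.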

Let's look at a real-data example:

library(fracdiff)
library(data.table)

# data.path should point to the directory containing "tree-rings.txt"
# (a single-column file, read in by fread as column V1)
tree.ring <- ts(fread(file.path(data.path, "tree-rings.txt"))[, V1])
layout(matrix(1:4, ncol = 2))
plot(tree.ring)
acf(tree.ring)
# fit fractional noise, i.e. an ARFIMA(0, d, 0) model
mod.arfima <- fracdiff(tree.ring)
mod.arfima$d


## [1] 0.236507

So we fit fractional noise with parameter $\widehat{d} \approx 0.24$ (since $\widehat{d} < 0.5$ we think everything is fine and we have a stationary model). Let's check the residuals:

# fractionally difference the series with the estimated d;
# the result plays the role of the model residuals
arfima.res <- diffseries(tree.ring, mod.arfima$d)
plot(arfima.res)
acf(arfima.res)

[Figure: the fractionally differenced series and its sample ACF]

Looks good, right? Well, the issue is that the residuals are spurious white noise. How do I know? First, I can test it:

Box.test(arfima.res, type = "Ljung-Box")
## 
##  Box-Ljung test
## 
## data:  arfima.res
## X-squared = 1.8757, df = 1, p-value = 0.1708

Box.test.ls(arfima.res, K = 4, type = "Ljung-Box")
## 
##  LS Ljung-Box test; Number of windows = 4; non-overlapping window
##  size = 497
## 
## data:  arfima.res
## X-squared = 39.361, df = 4, p-value = 5.867e-08

and second, we know from the literature that the tree-ring data is in fact locally stationary fractional noise: see Goerg (2012) and Ferreira, Olea, and Palma (2013).

This shows that my admittedly theoretical-looking example does in fact occur in most real-world examples.
