Solved – How to test for wide-sense stationarity with only one sample path of the process

stationarity, time series

I have a univariate time series consisting of 70,000 observations (power consumption of a building) over equal time increments (15 minutes).

How do I check whether this realization is wide-sense stationary? Note that I don't know the formula of the underlying stochastic process, so I cannot calculate the mean function $m(t) = E[x(t)]$ or the autocovariance function $E[(x(t_1)-m(t_1))(x(t_2)-m(t_2))]$.

(I'm aware of the definition of wide-sense/weak/covariance stationarity and I've read several threads on the topic here at Cross Validated, the closest one being this one.)

I'm arriving at the conclusion that checking a sample path for weak stationarity comes down to some heuristic tests.

The Engineering Statistics Handbook (which I like for its practical approach) states:

Stationarity can be defined in precise mathematical terms, but for our purpose we mean a flat looking series, without trend, constant variance over time, a constant autocorrelation structure over time and no periodic fluctuations.

(Also see the reply in this MATLAB Central thread.)

If you agree with this practical approach, then my comments/questions are:

  • We can easily do linear fitting to tell if there's a trend. We're good here.
  • But how do we test for a constant autocorrelation structure over time?
  • And how do we check for constant variance over time? Do we use moving windows or non-overlapping windows, and what should the size of the window be? (A rough sketch of the kind of windowed check I have in mind follows this list.)
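
For concreteness, here is the kind of windowed check I have in mind, as a minimal Python/NumPy sketch. The one-day window of 96 fifteen-minute intervals is an arbitrary choice of mine, and `x` stands for the observed series:

```python
import numpy as np

def windowed_checks(x, window=96):
    """Mean, variance, and lag-1 autocorrelation in non-overlapping windows
    (window=96 corresponds to one day of 15-minute readings)."""
    n_win = len(x) // window
    means, variances, acf1 = [], [], []
    for i in range(n_win):
        w = x[i * window:(i + 1) * window]
        d = w - w.mean()                                    # demeaned window
        means.append(w.mean())
        variances.append(d.var())
        acf1.append(np.dot(d[1:], d[:-1]) / np.dot(d, d))   # lag-1 autocorrelation
    return np.array(means), np.array(variances), np.array(acf1)

# Crude trend check: slope of a least-squares line through the whole series.
# t = np.arange(len(x))
# slope, intercept = np.polyfit(t, x, 1)

# If the series were weakly stationary, the three windowed statistics should
# fluctuate around constant levels; a systematic drift in any of them
# (plotted against window index) is evidence against stationarity.
```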

I'm looking for some good rules of thumb to test with.

Thanks in advance for the help, and apologies for the long post. (First post here.)

Best Answer

Any single test focuses on only one or two facets of stationarity (unit-root tests, for instance, only target a stochastic trend). In addition to the ideas suggested in other threads, you can do the following.

1] Regress $Y_t$ on predictors $\{g_1(t),...,g_p(t)\}$, where $g_1(),...,g_p()$ are some (non-)linear deterministic functions of time. Check the joint statistical significance of their coefficients; significance points to a time-varying mean.

2] Back out residuals $\varepsilon_t$.

3] Regress $\varepsilon_t^2$ on predictors $\{g_1(t),...,g_p(t)\}$. Check the joint statistical significance of the coefficients; significance points to a time-varying variance.

4] Regress $\varepsilon_t\varepsilon_{t-1}$ on predictors $\{g_1(t),...,g_p(t)\}$. Check the joint statistical significance of the coefficients; significance points to a time-varying lag-1 autocovariance.
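
For concreteness, here is a minimal sketch of these four steps in Python with statsmodels. The choice of low-order polynomial terms in $t$ as the deterministic regressors $g_k(t)$, and the function names, are mine; also note that with heavily autocorrelated data the plain OLS F-test p-values are only indicative, and HAC-robust standard errors would be a safer basis for the joint tests.

```python
import numpy as np
import statsmodels.api as sm

def stationarity_regressions(y):
    """Steps 1]-4]: regress y, the squared residuals, and the lagged residual
    products on deterministic functions of time; return the joint F-test
    p-values for a time-varying mean, variance, and lag-1 autocovariance."""
    t = np.arange(len(y), dtype=float)
    t = (t - t.mean()) / t.std()                        # rescale to avoid ill-conditioning
    G = np.column_stack([t, t**2, t**3])                # g_1(t), ..., g_p(t): arbitrary choice

    def joint_fit(dep, regs):
        fit = sm.OLS(dep, sm.add_constant(regs)).fit()
        return fit, fit.f_pvalue                        # p-value of the F-test on all g_k(t)

    fit_mean, p_mean = joint_fit(y, G)                  # 1] time-varying mean?
    eps = fit_mean.resid                                # 2] residuals
    _, p_var = joint_fit(eps**2, G)                     # 3] time-varying variance?
    _, p_acov = joint_fit(eps[1:] * eps[:-1], G[1:])    # 4] time-varying autocovariance?
    return p_mean, p_var, p_acov

# Toy usage: small p-values argue against the corresponding facet of weak
# stationarity (the input here is i.i.d. noise, so none of them should be small).
rng = np.random.default_rng(0)
print(stationarity_regressions(rng.standard_normal(70_000)))
```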
