Solved – What’s the name for a time series with constant mean

stationarity, stochastic-processes, terminology, time-series

Consider a random process $\{X_t\}$ for which the mean $\mathbb{E}(X_t)$ exists and is constant for all times $t$, i.e. $\mathbb{E}(X_t)=\mathbb{E}(X_{t+\tau})$ for all times $t$ and time shifts (or "lags") $\tau$. I impose no further conditions on higher moments or on the distribution function. How can I describe such a process? It is only stationary in a weaker sense than "weakly stationary" (i.e. second-order stationarity).
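To make concrete how weak this condition is, here is a minimal simulation sketch (my own illustration, assuming NumPy; the parameter values are arbitrary) of a process whose mean is constant at every $t$ but whose variance, and hence distribution, changes with $t$: it satisfies the condition above while failing both weak (second-order) stationarity and the distributional notions of stationarity discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical process: X_t ~ Normal(mu, sigma_t^2) with sigma_t growing in t.
# E(X_t) = mu for every t, so the mean is constant, but the variance (and the
# marginal distribution) is not, so the process is not weakly stationary.
T = 1000                                   # number of time points (arbitrary)
n_reps = 5000                              # realisations used to estimate moments
mu = 2.0
sigma_t = 1.0 + 0.01 * np.arange(T)        # standard deviation drifts upward

X = mu + sigma_t * rng.standard_normal((n_reps, T))

print(X.mean(axis=0)[[0, 499, 999]])       # all close to 2.0: constant mean
print(X.var(axis=0)[[0, 499, 999]])        # grows with t: not second-order stationary
```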

Other forms of stationarity have many names; for the weak case, for example, one could also say "wide-sense stationary" or "covariance stationary". So I'd expect several possible terms to be applicable, but all the ones I can think of have drawbacks.

  • First-order stationary, or stationary to order one, is analogous to "second-order stationary" and the "stationary to order $n$" formulation often used for higher moments. But while I have seen "first-order stationary" used for processes with constant mean (e.g. here), it is commonly used with a different meaning in signal processing, the field that provides the majority of search engine hits. Every signal processing book I checked defined a process to be first-order stationary iff the first-order distribution function is invariant over time, i.e. $F_{X(t)}(x)=F_{X(t+\tau)}(x)$ for all times $t$, shifts $\tau$ and values $x$. This is quite a different condition from requiring an invariant mean; so long as the mean exists, it is a far stricter condition. They also defined "second-order stationarity" to refer to the second-order distribution function satisfying $$F_{X(t_1),X(t_2)}(x_1, x_2) = F_{X(t_1 + \tau),X(t_2 + \tau)}(x_1, x_2)$$ for all times $t_1$, $t_2$, shifts $\tau$, and values $x_1$ and $x_2$; this is (assuming the appropriate moments exist) a stronger condition than requiring the mean and the covariance at any given lag to be independent of time, for which they reserved the term "wide-sense stationarity". Clearly one must be eagle-eyed about whether "$n^\text{th}$-order" refers to distributions or moments, with great potential for confusion. As far as I can see, "first-order stationary", in particular, is mostly used in the distributional sense. Perhaps we can disambiguate, but I found no search engine hits for e.g. "first-order moment stationary" and just one relevant hit for "first-moment stationary".

  • Mean stationary may work by analogy to "covariance stationary", but I found it hard to establish prior usage. Search results were swamped by "zero-mean stationary process", which is quite different. I did find about a dozen relevant results for mean-value stationary being used in the sense I desire, but that is too few to suggest it is the conventional terminology.

  • Constant level seems at first sight quite unambiguous, since "level" is widely understood to refer to "mean response" (e.g. in a regression context). However, take a random walk (without drift) $X_t = \sum_{i=0}^{t}\varepsilon_i$ where $\{\varepsilon_i\} \sim \text{WN}(0, \sigma^2)$. We know that in the population $\mathbb{E}(X_t)=0$ for all $t$, yet in any particular realisation of $\{X_t\}$ the persistence of shocks produces a "drunkard's walk" which can stray far from the mean. When we can see multiple realisations, as in the figure and the simulation sketch below, the fact that the true mean remains zero is clearer; if we saw only one particular sample then, for most of the series below, "constant level" would not be the description that immediately springs to mind! Moreover, the search term "constant level time series" in Google Scholar found only two papers, so it doesn't seem to be used in an adjectival way.

[Figure: eight simulated random walks, each a cumulative sum of WN(0,1) innovations]
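A figure like the one above can be reproduced with a few lines of code; this is a sketch of my own (assuming NumPy and Matplotlib), not the original script, and the seed and path length are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Eight independent random walks X_t = sum_{i<=t} eps_i with eps_i ~ WN(0, 1).
# Every path has population mean E(X_t) = 0 for all t, yet individual
# realisations wander far from zero, which is why a single path rarely
# looks like it has a "constant level".
n_paths, T = 8, 500
walks = rng.standard_normal((n_paths, T)).cumsum(axis=1)

plt.plot(walks.T, linewidth=0.8)
plt.axhline(0.0, color="black", linestyle="--")   # the constant population mean
plt.xlabel("$t$")
plt.ylabel("$X_t$")
plt.title("Eight simulated random walks (cumulative sums of WN(0,1))")
plt.show()
```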

How might I fill out the sentences "$X_t$ is a […] process" or "$X_t$ is […]" in a clear and unambiguous manner? Is there another term I have missed, or will one of the above — perhaps after suitable clarification — work well enough? I thought "first-moment stationary" had admirable clarity but its usage is clearly in the minority; I liked "mean stationary" for similar reasons, but found it hard to establish evidence of prior use.

Best Answer

I suspect there is no general term that will cover all cases. Consider, for example, a white noise generator. In that case, we would just call it white noise. Now, if the white noise comes from a natural source, e.g., AM radio band white noise, then it has superimposed effects: diurnal, seasonal, and sunspot-cycle (11-year) solar variability, and man-made primary and beat interference from radio broadcasts.

For example, the graph in the link mentioned by the OP looks like amplitude-modulated white noise, almost like an earthquake. I personally would examine such a curve in the frequency and/or phase domain and describe how it evolves in time, because directly observing how the amplitudes over a set of frequency ranges evolve in time, relative to detection limits, would reveal far more about the signal structure than thinking about stationarity, mainly by reason of conceptual compactness. I understand the appeal of statistical testing. However, it would take umpteen tests and oodles of different criteria, as in the link, to even incompletely describe an evolving frequency-domain concept, which makes the attempt to develop stationarity as a fundamental property seem rather confining. How does one get from that to Bode plots and phase plots?
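As a rough sketch of the frequency-domain look described above (my own illustration, assuming NumPy, SciPy and Matplotlib; the amplitude-modulated white-noise signal is an invented stand-in for the OP's curve), one could start from a periodogram of the whole record and then watch how windowed spectra evolve over time:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import periodogram, spectrogram

rng = np.random.default_rng(1)

# Amplitude-modulated white noise: zero mean throughout, but the envelope
# (and hence the spectral content over time) evolves.
fs = 100.0                                    # sampling frequency, arbitrary
t = np.arange(0, 60, 1 / fs)
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * 0.05 * t)
x = envelope * rng.standard_normal(t.size)

# Periodogram of the whole record ...
f, Pxx = periodogram(x, fs=fs)
plt.semilogy(f, Pxx)
plt.xlabel("frequency [Hz]")
plt.ylabel("power spectral density")
plt.show()

# ... and a spectrogram to see how the amplitude over each frequency range
# evolves in time, the kind of direct observation suggested above.
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256)
plt.pcolormesh(tt, f, Sxx, shading="auto")
plt.xlabel("time [s]")
plt.ylabel("frequency [Hz]")
plt.show()
```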

Having said that much, signal processing becomes more complicated when a "primary" violation of stationarity occurs: the patient dies, the signal stops, a random walk continues, and so forth. Such processes are easier to describe as non-stationary than, variously, as an infinite sum of odd harmonics or as a frequency decreasing to zero. The OP's complaint about not having much literature documenting this weaker kind of stationarity is entirely reasonable; there does not even seem to be complete agreement as to what constitutes ordinary stationarity. For example, NIST claims that "A stationary process has the property that the mean, variance and autocorrelation structure do not change over time." Others on this site claim that "Autocorrelation doesn't cause non-stationarity," or, using mixture distributions of RVs, that "This process is clearly not stationary, but the autocorrelation is zero for all lags since the variables are independent." This is problematic because a changing autocorrelation structure is typically "tacked on" as an additional criterion of non-stationarity without much consideration given to how necessary and sufficient it is for defining such a process.

My advice would be to first observe a process and then describe it, using phrases couched in modifiers such as "stationary/non-stationary with respect to ...", as the alternative is to confuse many readers as to what is meant.