Expected Value of Long Memory Moving Average – Probability and Time Series

Tags: pr.probability, time-series

Let $X$ be an infinite moving average time series, i.e.
$$
X(t) = \sum_{j = -\infty}^\infty a_j \varepsilon_{t-j}, \quad t \in \mathbb{Z},
$$

where the $\varepsilon_{j}$ are uncorrelated, identically distributed random variables with zero mean and finite variance.

In my opinion, $\mathbb{E}X(t) = 0$ for all $t \in \mathbb{Z}$ is easy to prove if $\sum_j \vert a_j \vert < \infty$. This condition is often referred to as short memory of the moving average, cf. Section 4.2.4 in [1].
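To spell that case out (a standard argument, included here for context): since the $\varepsilon_j$ are identically distributed with $\mathbb{E}|\varepsilon_0| < \infty$, Tonelli's theorem gives
$$
\mathbb{E}\sum_{j=-\infty}^\infty |a_j|\,|\varepsilon_{t-j}| = \mathbb{E}|\varepsilon_0| \sum_{j=-\infty}^\infty |a_j| < \infty,
$$
so the series defining $X(t)$ converges absolutely almost surely, and Fubini's theorem justifies interchanging expectation and summation: $\mathbb{E}X(t) = \sum_j a_j\,\mathbb{E}\varepsilon_{t-j} = 0$.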

But what happens if we only assume $\sum_j \vert a_j \vert^2 < \infty$? This is a common assumption, e.g. in Wold's decomposition, and covers both long- and short-memory moving averages. Is there an alternative way to prove $\mathbb{E}X(t) = 0$ for all $t \in \mathbb{Z}$ under this weaker condition?
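For a concrete example of the gap between the two assumptions (my illustration, not from the original question): take
$$
a_j = (1+|j|)^{-3/4}, \qquad \text{so that} \qquad \sum_{j}|a_j| = \infty \quad \text{but} \quad \sum_{j}|a_j|^2 = \sum_j (1+|j|)^{-3/2} < \infty.
$$
Such coefficients are covered by square summability but not by absolute summability; this is exactly the long-memory regime.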

References

[1] J. Beran, Y. Feng, S. Ghosh, and R. Kulik. Long-Memory Processes: Probabilistic Properties and Statistical Methods. Springer, 2013.

Best Answer

This is a slightly more elementary version of Anthony Quas's answer, without appealing to martingales.

By the condition $\sum_j|a_j|^2<\infty$, the sequence of random variables $X_n:=\sum_{j=-n}^n a_j\varepsilon_{-j}$ is Cauchy in $L^2$ and hence in $L^1$. So $X_n$ converges to a limit $X(0)$ in $L^1$. Since $\mathbb{E}X_n=0$ for all $n$, we get $\mathbb{E}X(0)=0$.
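To make the Cauchy estimate explicit (with $\sigma^2:=\mathbb{E}\varepsilon_0^2$): for $m<n$, orthogonality of the $\varepsilon_j$ gives
$$
\mathbb{E}(X_n - X_m)^2 = \sigma^2 \sum_{m<|j|\le n} a_j^2 \longrightarrow 0 \quad \text{as } m,n\to\infty,
$$
and $\mathbb{E}|X_n-X_m| \le \big(\mathbb{E}(X_n-X_m)^2\big)^{1/2}$ by Cauchy–Schwarz, so the sequence is also Cauchy in $L^1$. Convergence in $L^1$ then yields $|\mathbb{E}X_n - \mathbb{E}X(0)| \le \mathbb{E}|X_n - X(0)| \to 0$.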

(If the $\varepsilon_j$ are moreover independent, then in view of Kolmogorov's maximal inequality, $X_n\to X(0)$ almost surely as well.)
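As a quick numerical sanity check of the conclusion (my sketch, not part of the original answer; the coefficient choice and all parameter values are illustrative), one can simulate the truncated sum $X_n$ with the square-summable but not absolutely summable coefficients from the example above and verify that its Monte Carlo mean is close to zero:

```python
import numpy as np

# Illustrative sanity check (not from the original answer): simulate the
# truncated moving average X_n = sum_{|j|<=n} a_j * eps_{-j} with the
# square-summable but NOT absolutely summable coefficients
# a_j = (1 + |j|)^(-3/4), and check that the Monte Carlo mean of X(0)
# is close to the theoretical value 0.

rng = np.random.default_rng(0)
n = 2000        # truncation level of the series (illustrative choice)
reps = 1000     # number of independent Monte Carlo replications

j = np.arange(-n, n + 1)
a = (1.0 + np.abs(j)) ** (-0.75)   # sum |a_j| diverges, sum a_j^2 converges

# Each row holds one realization of (eps_{-n}, ..., eps_n), iid N(0, 1).
eps = rng.standard_normal((reps, j.size))
X0 = eps @ a                       # one sample of X_n per replication

print(f"Monte Carlo mean of X(0): {X0.mean():+.4f}")
print(f"Monte Carlo std  of X(0): {X0.std():.4f}")
```

The printed mean should be within a few standard errors of zero, consistent with $\mathbb{E}X(0)=0$.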