Time-Series – Why Use the ACF of a Series to Define Its Joint Distribution?

autocorrelation, self-study, state-space-models, time-series

I'm working on an unassessed course problem:

Consider the time series $y_t$ generated by the state space model with $x_t=1$, $F_t=\lambda$, $\sigma_t^2=\sigma^2$, $Z_t=Z$, where the variances $\sigma^2,Z$ and the constant $\lambda$ are all known. Define the time series $$z_t=y_t-\lambda y_{t-1}.$$
(a) Write down the observation and transition equations of the state space model of $y_t$.

(b) By obtaining the mean and variance of $z_t$ together with the autocovariances $\text{Cov}(z_t,z_{t+k})$ for integer $k$ or otherwise, define the joint probability distribution of $\{z_t\}$.

The answer booklet has

(a) \begin{align} y_t & = \beta_t+\epsilon_t, \hspace{1em} \epsilon_t\sim \text{N}(0,\sigma^2), \\ \beta_t & = \lambda\beta_{t-1} + \zeta_t, \hspace{1em} \zeta_t \sim \text{N}(0,Z). \end{align}
(b) From (a) and from the definition of $z_t$ we have $$z_t=y_t-\lambda y_{t-1}=\beta_t+\epsilon_t-\lambda\beta_{t-1}-\lambda\epsilon_{t-1}=\epsilon_t-\lambda\epsilon_{t-1}+\zeta_t,$$ so, for all values of $\lambda$, the time series $\{z_t\}$ is weakly stationary (check it!), the joint distribution is multivariate normal (being a linear transformation of the series $\{\epsilon_t,\zeta_t\}$) and therefore it is defined completely by its first two moments, namely the mean of $z_t$ and the ACF of $z_t$. Since $$\mathbb{E}[z_t]=0,\hspace{1em}\mathbb{V}[z_t]=(1+\lambda^2)\sigma^2+Z, \\ \gamma_1=\text{Cov}[z_t,z_{t-1}]=-\lambda\sigma^2, \hspace{1em} \gamma_k=\text{Cov}[z_t,z_{t-k}]=0\;\forall\;|k|>1, \\ \rho_0=1, \hspace{1em} \rho_{\pm1}=-\frac{\lambda\sigma^2}{(1+\lambda^2)\sigma^2+Z}, \hspace{1em} \rho_k=0\;\forall\;|k|>1.$$
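For the "check it!" step: since the $\epsilon_t$ and $\zeta_t$ terms are mutually independent across all $t$, the only term shared by $z_t=\epsilon_t-\lambda\epsilon_{t-1}+\zeta_t$ and $z_{t-1}=\epsilon_{t-1}-\lambda\epsilon_{t-2}+\zeta_{t-1}$ is $\epsilon_{t-1}$, so $$\gamma_1=\text{Cov}[z_t,z_{t-1}]=-\lambda\,\mathbb{V}[\epsilon_{t-1}]=-\lambda\sigma^2,$$ while for $|k|>1$ the expressions for $z_t$ and $z_{t-k}$ share no terms at all, giving $\gamma_k=0$. None of these moments depends on $t$, which is exactly weak stationarity.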

For (b), what does it add to find the ACF? Why not just say $$z_t\sim\text{N}(\vec{0},(1+\lambda^2)\sigma^2+Z)?$$ Also (this might be a dumber question), what's the importance of showing that $z_t$ is stationary?

Best Answer

Why do we need the ACF?

The distribution you propose as the answer is just the marginal distribution of a single element of the time-series. If you want to extend this to get the joint distribution of a vector of values from the time-series, then you need to find the appropriate mean vector and variance matrix. The variance matrix is composed of the marginal variances of the elements of the time-series, plus the covariances between them, so it is fully determined by the marginal variance and the ACF. The answer provided derives both the variance/covariance elements and the ACF all in one go; bear in mind that once you know the marginal variance, the covariance and correlation values provide the same information.
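To make this concrete, here is a minimal sketch in Python (using NumPy/SciPy; the parameter values and variable names are illustrative choices of mine, not from the problem) that builds the joint covariance matrix of $n$ consecutive values of $z_t$ from nothing more than the marginal variance and the autocovariances, then samples from the resulting multivariate normal:

```python
import numpy as np
from scipy.linalg import toeplitz

# Illustrative values for the known constants (not from the problem)
lam, sigma2, Z = 0.6, 1.0, 0.5
n = 5  # number of consecutive z-values in the vector

# Autocovariances of z_t = eps_t - lam * eps_{t-1} + zeta_t
gamma = np.zeros(n)
gamma[0] = (1 + lam**2) * sigma2 + Z  # gamma_0: the marginal variance
gamma[1] = -lam * sigma2              # gamma_1; gamma_k = 0 for |k| > 1

# Weak stationarity makes the covariance matrix Toeplitz:
# Sigma[i, j] = gamma_{|i - j|}
Sigma = toeplitz(gamma)

# The joint distribution: multivariate normal with zero mean vector
rng = np.random.default_rng(seed=0)
z = rng.multivariate_normal(mean=np.zeros(n), cov=Sigma)

print(Sigma)  # tridiagonal, constant along each diagonal
print(z)      # one draw of (z_1, ..., z_n)
```

The mean vector and $\Sigma$ are all the multivariate normal needs, and $\Sigma$ is built entirely from $\gamma_0$ and $\gamma_1$; that is why deriving the autocovariances completes the answer.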

What's the importance of showing that the time-series is stationary?

If the time-series is weakly stationary then this is enough to show that the mean vector and variance matrix of any vector of time-series values are invariant to shifting the times backward or forward. For a time-series that follows a multivariate normal distribution, this is also sufficient for strong stationarity (because the normal distribution is fully parameterised by its mean and variance), which means that the joint distribution of time-series values is invariant to shifting the times backward or forward. Consequently, you can give the joint distribution for a time-series vector, and that distribution depends on the times only through the lags between its elements.
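For the series in the question this can be written out explicitly: for any starting time $t$ and any $n$,

$$\begin{pmatrix} z_{t+1} \\ \vdots \\ z_{t+n} \end{pmatrix} \sim \text{N}\left(\mathbf{0},\,\Sigma\right), \qquad \Sigma_{ij}=\gamma_{|i-j|}=\begin{cases} (1+\lambda^2)\sigma^2+Z & i=j, \\ -\lambda\sigma^2 & |i-j|=1, \\ 0 & |i-j|>1, \end{cases}$$

and since nothing in this expression involves $t$, shifting the whole block of times forward or backward leaves the joint distribution unchanged.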

Note: A slight additional complication here is that the acronym ACF can stand for autocorrelation function or autocovariance function. In your question you are interpreting it as the former, which is the usual case.
