Solved – What’s the model representation for the first difference of a local level model

kalman filter, self-study, state-space-models, time series

This is my first exercise on state space models and I have a few questions I need to resolve before I actually start doing the exercise. Unfortunately, I'm self-teaching (I have no professor to ask) and I'm afraid there's no solution companion for Durbin and Koopman (2012)!

Exercise 2.13.1 from Time Series Analysis by State Space Methods Second Edition

Consider the local level model (2.3).

(a) Give the model representation for $x_t = y_t - y_{t-1}$, for $t = 2, \dots, n$.

(b) Show that the model for $x_t$ in (a) can have the same statistical properties as the model given by $x_t = \epsilon_t + \theta \epsilon_{t-1}$, where the $\epsilon_t \sim N(0, \sigma_{\epsilon}^2)$ are independent disturbances with variance $\sigma_{\epsilon}^2 > 0$, for some value of $\theta$.

(c) For what value of $\theta$, in terms of $\sigma_{\epsilon}^2$ and $\sigma_{\eta}^2$, are the model representations for $x_t$ in (a) and (b) equivalent? Comment.

For the record, the local level model (2.3) is given by:

$y_t = \alpha_t + \epsilon_t \quad\quad \epsilon_t \sim N(0, \sigma_{\epsilon}^2)$

$\alpha_{t+1} = \alpha_t + \eta_t \quad\quad \eta_t \sim N(0, \sigma_{\eta}^2)$

Doubts about (a)

First of all, the model proposed in (a) looks like noise (which makes sense, since it's the first difference of a random walk plus noise). Is the following representation correct?

$$ x_t = y_t - y_{t-1} = \alpha_t + \epsilon_t - \alpha_{t-1} - \epsilon_{t-1} $$
$$ x_t = \alpha_{t-1} + \eta_{t-1} + \epsilon_t - \alpha_{t-1} - \epsilon_{t-1} $$
$$ x_t = \eta_{t-1} + \epsilon_{t} - \epsilon_{t-1} $$

This raises some doubts. First, the state disturbance $\eta_{t-1}$ is now part of the observation equation. Second, what does the state equation mean now that the observation equation no longer involves the unobserved state $\alpha_t$? Third, and somewhat related, what is the mean of the unobserved state now that $\alpha_t$ no longer appears in the formula? Zero?
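(Not part of the exercise, but to convince myself the algebra is right, here is a quick NumPy simulation sketch; the variance values and seed are arbitrary choices of mine. It checks that $y_t - y_{t-1}$ computed directly coincides with $\eta_{t-1} + \epsilon_t - \epsilon_{t-1}$, and that the resulting series is correlated at lag 1 but not at lag 2.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_eps, sigma_eta = 10_000, 1.0, 0.5      # arbitrary values

# simulate the local level model (2.3): alpha_{t+1} = alpha_t + eta_t, y_t = alpha_t + eps_t
eta = rng.normal(0.0, sigma_eta, n)
eps = rng.normal(0.0, sigma_eps, n)
alpha = np.concatenate(([0.0], np.cumsum(eta)[:-1]))
y = alpha + eps

# first difference computed directly and via the proposed representation
x_direct = y[1:] - y[:-1]
x_derived = eta[:-1] + eps[1:] - eps[:-1]       # eta_{t-1} + eps_t - eps_{t-1}

assert np.allclose(x_direct, x_derived)         # the two series coincide
print(np.corrcoef(x_direct[1:], x_direct[:-1])[0, 1])   # nonzero correlation at lag 1
print(np.corrcoef(x_direct[2:], x_direct[:-2])[0, 1])   # approximately zero at lag 2
```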

Doubts about (b)

Additionally, I wonder how to show that two models have the same statistical properties. What do you have to prove to say they're the same? The same expected value and variance of the observation $x_t$, the unobserved state $\alpha_t$, the prediction error $v_t = x_t - a_t$, the filtered state, the updated state, etc.? Since all random variables are Normal, I guess showing that the first two moments match is enough, but (a) which distribution (marginal, conditional, and conditional on what?) of (b) which variables (observation, hidden state, prediction error, etc.) should match?

Any comment is much appreciated!

Update

This is where I got to after the hints provided by @Glen_b and @javlacalle.

(a)

$$ x_t = \eta_{t-1} + \epsilon_t - \epsilon_{t-1}$$

(b)

With respect to the model for $x_t$ given in (a):

$$ E[x_t] = 0 $$
$$ \gamma(0) = Var(x_t) = \sigma_{\eta}^2 + 2\sigma_{\epsilon}^2 $$
$$ \gamma(1) = Cov(x_t, x_{t-1}) = -\sigma_{\epsilon}^2 $$
$$ \gamma(2) = Cov(x_t, x_{t-2}) = 0 $$
$$ \rho(1) = \frac{-\sigma_{\epsilon}^2}{\sigma_{\eta}^2 + 2\sigma_{\epsilon}^2} $$
$$ \rho(2) = 0 $$
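(A quick Monte Carlo check of these expressions, not part of the exercise; the variance values and seed are arbitrary choices of mine.)

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma_eps, sigma_eta = 1_000_000, 1.0, 0.5   # arbitrary values

eta = rng.normal(0.0, sigma_eta, n)
eps = rng.normal(0.0, sigma_eps, n)
x = eta[:-1] + eps[1:] - eps[:-1]               # x_t = eta_{t-1} + eps_t - eps_{t-1}

gamma0 = np.var(x)
gamma1 = np.mean((x[1:] - x.mean()) * (x[:-1] - x.mean()))
gamma2 = np.mean((x[2:] - x.mean()) * (x[:-2] - x.mean()))

print(gamma0, sigma_eta**2 + 2 * sigma_eps**2)  # gamma(0): ~2.25
print(gamma1, -sigma_eps**2)                    # gamma(1): ~-1.0
print(gamma2, 0.0)                              # gamma(2): ~0.0
print(gamma1 / gamma0, -sigma_eps**2 / (sigma_eta**2 + 2 * sigma_eps**2))  # rho(1)
```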

With respect to the model for $x_t$ proposed in (b), which I rename $z_t$ to avoid confusion:

$$ E[z_t] = 0 $$
$$ \gamma(0) = Var(z_t) = \sigma_{\epsilon}^2 (1 + \theta^2) $$
$$ \gamma(1) = Cov(z_t, z_{t-1}) = \theta \sigma_{\epsilon}^2 $$
$$ \gamma(2) = Cov(z_t, z_{t-2}) = 0 $$
$$ \rho(1) = \frac{\theta}{1 + \theta^2} $$
$$ \rho(2) = 0 $$
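(And the analogous check for the MA(1) moments, again with arbitrary values of $\theta$ and $\sigma_\epsilon$ of my own choosing.)

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, theta = 1_000_000, 1.0, -0.6          # arbitrary values

e = rng.normal(0.0, sigma, n)
z = e[1:] + theta * e[:-1]                      # z_t = eps_t + theta * eps_{t-1}

gamma0 = np.var(z)
gamma1 = np.mean((z[1:] - z.mean()) * (z[:-1] - z.mean()))

print(gamma0, sigma**2 * (1 + theta**2))        # gamma(0): ~1.36
print(gamma1, theta * sigma**2)                 # gamma(1): ~-0.6
print(gamma1 / gamma0, theta / (1 + theta**2))  # rho(1) = theta / (1 + theta^2)
```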

(c)

$$ E[x_t] = E[z_t] = 0 \quad \quad (c.1) $$

$$ \gamma_{x_t}(0) = \gamma_{z_t}(0) \leftrightarrow \sigma_{\eta}^2 + 2\sigma_{\epsilon}^2 = \sigma_{\epsilon}^2 (1 + \theta^2) \quad \quad (c.2) $$

$$ \gamma_{x_t}(1) = \gamma_{z_t}(1) \leftrightarrow -\sigma_{\epsilon}^2 = \theta \sigma_{\epsilon}^2 \rightarrow \theta = -1 \quad \quad (c.3) $$

$$ \gamma_{x_t}(2) = \gamma_{z_t}(2) = 0 \quad \quad (c.4) $$

$$ \rho_{x_t}(1) = \rho_{z_t}(1) \leftrightarrow \frac{-\sigma_{\epsilon}^2}{\sigma_{\eta}^2 + 2\sigma_{\epsilon}^2} = \frac{\theta}{1 + \theta^2} \quad \quad (c.5) $$

$$ \rho_{x_t}(2) = \rho_{z_t}(2) = 0 \quad \quad (c.6) $$

Equations c.1, c.4 and c.6 impose no restrictions on $\theta$, but equations c.2, c.3 and c.5 are clearly not mutually consistent.

Best Answer

You have arrived at the stationary form of the local level model:

$$ \Delta y_t \equiv x_t = \underbrace{\Delta \alpha_t}_{\eta_{t-1}} + \Delta \epsilon_t \,, $$

where $\Delta$ is the difference operator such that $\Delta y_t = y_t - y_{t-1}$.

Now, I think it is easier to first check the statistical properties (mean, covariances, autocorrelations) of this stationary form.

For example, the mean of this process is given by:

$$ \hbox{E}[x_t] = \hbox{E}[\eta_{t-1}] + \hbox{E}[\epsilon_t] - \hbox{E}[\epsilon_{t-1}] = 0 + 0 - 0 = 0 \,. $$

You can do the same to obtain the covariances of order $k$, $\gamma(k)$:

\begin{align}
\gamma(0) &= E\left[(\eta_{t-1} + \epsilon_t - \epsilon_{t-1})^2\right] = \dots \\
\gamma(1) &= E\left[(\eta_{t-1} + \epsilon_t - \epsilon_{t-1})(\eta_{t-2} + \epsilon_{t-1} - \epsilon_{t-2})\right] = \dots \\
\gamma(2) &= \dots \\
\gamma(k) &= \dots \quad \text{for } k > 2
\end{align}

You just need to take the expectation of the cross-products of all terms, bearing in mind that the $\eta_t$ and $\epsilon_t$ are independently distributed, independent of each other, and have variances $\sigma^2_\eta$ and $\sigma^2_\epsilon$ respectively.

Then it is straightforward to get the expression for the autocorrelations of order $k>0$, $\rho(k) = \frac{\gamma(k)}{\gamma(0)}$. These have the form that is characteristic of a moving average of order 1, MA(1) (the autocorrelations are zero for $k>1$); hence, $x_t$ can be represented as an MA(1) process and $y_t$ as an ARIMA(0,1,1) process.

In order to find the relationship between the parameters of the local level model and the MA coefficient, you can equate the expression for the first-order autocorrelation obtained above with the expression for the first-order autocorrelation of an MA(1). Following the same strategy as above, you can find that $\rho(1)$ for an MA(1) with coefficient $\theta$ is given by $\rho(1) = \theta/(1 + \theta^2)$. The expression you get by doing this also reveals that the local level model is a restricted ARIMA(0,1,1) model in which the MA coefficient $\theta$ can take only negative values.
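If you want to see this empirically, here is a minimal sketch of my own (assuming `statsmodels` is installed; the variance values and seed are arbitrary) that simulates a local level series and fits an ARIMA(0,1,1) to it; the estimated MA coefficient should come out negative:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n, sigma_eps, sigma_eta = 5_000, 1.0, 0.5       # arbitrary values, q = 0.25

# simulate the local level model and fit an ARIMA(0,1,1) to y_t
eta = rng.normal(0.0, sigma_eta, n)
eps = rng.normal(0.0, sigma_eps, n)
alpha = np.concatenate(([0.0], np.cumsum(eta)[:-1]))
y = alpha + eps

res = ARIMA(y, order=(0, 1, 1)).fit()
print(res.params)   # [ma.L1, sigma2]; the MA(1) coefficient should be negative
```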

Edit

Equation (c.5) is okay. You can get the relationship between the parameters of the local level model and the MA coefficient by solving equation (c.5) for $\theta$. You can rewrite it as a quadratic equation in $\theta$. One of the two solutions can be discarded because it implies a non-invertible MA, $|\theta|>1$.

When solving this equation, it will be helpful to define $q=\sigma^2_\eta/\sigma^2_\epsilon$. Also, check that $\frac{\sqrt{\sigma^4_\eta + 4\sigma^2_\eta\sigma^2_\epsilon}}{2\sigma^2_\epsilon} = \frac{\sqrt{q^2 + 4q}}{2}$; this way you will get a neater expression. Then, given that $0 < q < \infty$, you can check that the possible values of $\theta$ are zero or negative.
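As a numerical illustration of this last step (a sketch of my own, not from the book): dividing (c.5) through by $\sigma^2_\epsilon$ and cross-multiplying gives the quadratic $\theta^2 + (q+2)\theta + 1 = 0$. The code below keeps the invertible root and checks that it lies strictly between $-1$ and $0$ for the chosen values of $q$, and that it reproduces $\rho(1) = -1/(q+2)$:

```python
import numpy as np

def theta_from_q(q):
    """Invertible root of theta**2 + (q + 2)*theta + 1 = 0, obtained from (c.5)."""
    return (np.sqrt(q**2 + 4 * q) - q - 2) / 2

q = np.array([0.01, 0.25, 1.0, 4.0, 100.0])     # signal-to-noise ratios sigma_eta^2 / sigma_eps^2
theta = theta_from_q(q)

# the invertible root lies in (-1, 0) and reproduces the lag-1 autocorrelation -1/(q + 2)
assert np.all((theta > -1) & (theta < 0))
assert np.allclose(theta / (1 + theta**2), -1 / (q + 2))
print(np.column_stack((q, theta)))
```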
