Solved – Stationarity of AR(1) process, stable filter

autoregressive · filter · stationarity · stochastic-processes · time-series

This section of the Wikipedia article about the Autoregressive Model reads:

An AR(1) process is given by: $$X_t = c + \varphi X_{t-1}+\varepsilon_t$$ where $\varepsilon_t$ is a white noise process with zero mean and constant variance $\sigma_\varepsilon^2$.

Then it is stated:

The process is wide-sense stationary if $|\varphi|<1$ since it is obtained as the output of a stable filter whose input is white noise.

I do not understand this last sentence. In particular, I do not know:

  • What a stable filter is.
  • How the process can be written as the output of a stable filter.

A partial answer, e.g. only to the first bullet, is appreciated as well.

Best Answer

A stable filter is a filter whose output exists (has finite variance) and, in this context, is causal. Causal means that your current observation is a function of past or contemporaneous noise, not future noise. Why do they use the word stable? Intuitively, you can see what happens if you simulate data from the model with $|\varphi| > 1$: the process cannot hover around some mean for all time; it runs away.
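A quick simulation makes the contrast visible. This is a minimal sketch with illustrative parameter values (not taken from the text), using standard-normal white noise:

```python
import random

def simulate_ar1(phi, c=0.0, n=200, seed=0):
    """Simulate X_t = c + phi * X_{t-1} + eps_t with Gaussian white noise.
    Parameter values are illustrative, chosen only to show the contrast."""
    rng = random.Random(seed)
    x = 0.0
    path = []
    for _ in range(n):
        x = c + phi * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

stable = simulate_ar1(phi=0.5)     # |phi| < 1: hovers around the mean
explosive = simulate_ar1(phi=1.1)  # |phi| > 1: grows geometrically
print(max(abs(v) for v in stable))
print(max(abs(v) for v in explosive))
```

With $|\varphi| < 1$ the path stays bounded in a band around zero; with $|\varphi| > 1$ the magnitude blows up geometrically.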

If you rewrite your model as $$ X_t - \mu = \varphi(X_{t-1} - \mu) + \varepsilon_t $$ with $c = \mu(1-\varphi)$, then you can rewrite it again as $$ (1-\varphi B) Y_t = \varepsilon_t \tag{1} $$ where $B$ is the backshift operator and $Y_t = X_t - \mu$ is the demeaned process.
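To check the reparameterization $c = \mu(1-\varphi)$ numerically, note that taking expectations of the original model gives the mean recursion $\mathbb{E}[X_t] = c + \varphi\,\mathbb{E}[X_{t-1}]$, whose fixed point should recover $\mu$. A small sketch with hypothetical values:

```python
# Hypothetical values, chosen only to illustrate c = mu * (1 - phi):
phi, mu = 0.6, 2.5
c = mu * (1 - phi)

# Iterate the mean recursion E[X_t] = c + phi * E[X_{t-1}] from 0;
# since |phi| < 1 it converges to the fixed point c / (1 - phi) = mu.
m = 0.0
for _ in range(200):
    m = c + phi * m
print(m)  # ~ 2.5
```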

A filter is a (possibly infinite) linear combination that you apply to white noise (by white noise I mean errors that are mutually uncorrelated with mean zero; this does not necessarily mean they are independent). Filtering white noise is a natural way to form time series data. We would write filtered noise as $$ \psi(B)\varepsilon_t = \left(\sum_{j=-\infty}^{\infty}\psi_j B^j\right)\varepsilon_t = \sum_{j=-\infty}^{\infty}\psi_j \varepsilon_{t-j}, $$ where the collection of coefficients $\{\psi_j\}$ is our impulse response function.

This only exists (has finite mean and variance) if the coefficients $\psi_j$ shrink fast enough as $|j| \to \infty$. Usually they are assumed to be absolutely summable, that is $\sum_{j=-\infty}^{\infty} |\psi_j| < \infty$. Showing that this is a sufficient condition is a detail you might want to fill in yourself.
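For the coefficients that will appear below, $\psi_j = \varphi^j$ for $j \ge 0$, absolute summability is easy to check numerically: the partial sums of $\sum_j |\psi_j|$ converge to $1/(1-|\varphi|)$, and the variance $\sigma_\varepsilon^2 \sum_j \psi_j^2$ converges to $\sigma_\varepsilon^2/(1-\varphi^2)$. A sketch with an illustrative $\varphi$:

```python
phi, sigma2 = 0.8, 1.0  # illustrative values, not from the text

# Coefficients psi_j = phi**j, truncated far beyond where they matter:
psi = [phi**j for j in range(1000)]

abs_sum = sum(abs(p) for p in psi)    # -> 1 / (1 - |phi|) = 5
var = sigma2 * sum(p * p for p in psi)  # -> sigma2 / (1 - phi**2) ~ 2.778
print(abs_sum, var)
```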

Getting our filter $\psi(B)$ from our model's AR polynomial $\varphi(B)$ is not always something you can do, though. If we could divide both sides of (1) by $(1-\varphi B)$, then your model would be $$ Y_t = \sum_{j=-\infty}^{\infty}\psi_j \varepsilon_{t-j}, $$ just as in ordinary algebra, and we would then figure out what each $\psi_j$ is in terms of $\varphi$. You can only do this, however, if the root of the complex polynomial $1 - \varphi z$ does not lie on the unit circle, or equivalently if $|\varphi|\neq 1$ when you write the constraint in terms of the parameter instead of the complex variable $z$. If moreover $|\varphi| < 1$ (or, stated in terms of $z$ again, the root lies outside the unit circle), then your model is causal, and you do not have to filter future noise: $$ Y_t = \sum_{j=0}^{\infty}\psi_j \varepsilon_{t-j} = \sum_{j=0}^{\infty}\varphi^j \varepsilon_{t-j}. $$ See how the sum over lags runs from $0$ to $\infty$ now?
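You can check numerically that the recursive definition and the causal filter representation agree: running the recursion $Y_t = \varphi Y_{t-1} + \varepsilon_t$ from zero over a sample gives the same value as applying the filter $\sum_{j} \varphi^j \varepsilon_{t-j}$ truncated at the sample start. A sketch with an illustrative $\varphi$:

```python
import random

rng = random.Random(42)
phi = 0.5                     # illustrative value with |phi| < 1
n = 500
eps = [rng.gauss(0.0, 1.0) for _ in range(n)]

# Recursive definition, started at zero: Y_t = phi * Y_{t-1} + eps_t
y = 0.0
for e in eps:
    y = phi * y + e           # after the loop, y is Y at the final time

# Causal filter: Y_t = sum_j phi**j * eps_{t-j}, truncated at the start
y_filter = sum(phi**j * eps[n - 1 - j] for j in range(n))
print(y, y_filter)
```

The two numbers agree up to floating-point rounding, since unrolling the recursion yields exactly the truncated filter sum.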

Figuring out the coefficients of $\psi(B)$ in terms of $\varphi$ can be done by solving $(1 + \psi_1 B + \psi_2 B^2 + \cdots)(1 - \varphi B) = 1$ and matching coefficients of each power of $B$, and this might be something you want to do yourself.
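As a sanity check on that coefficient matching, here is a small sketch that multiplies the candidate series with $\psi_j = \varphi^j$ against $(1 - \varphi B)$ and confirms that every power of $B$ beyond the constant cancels (an illustrative $\varphi$, truncated at ten terms):

```python
phi = 0.7                      # illustrative value, not from the text
psi = [1.0]
for _ in range(9):
    psi.append(psi[-1] * phi)  # candidate coefficients psi_j = phi**j

# Multiply (psi_0 + psi_1 B + ...) by (1 - phi B), collecting powers of B:
prod = [0.0] * (len(psi) + 1)
for j, p in enumerate(psi):
    prod[j] += p               # contribution of psi_j * 1
    prod[j + 1] -= p * phi     # contribution of psi_j * (-phi B)

print(prod[:10])               # 1.0 followed by zeros, up to the truncation
```

The matching works because the $B^j$ coefficient of the product is $\psi_j - \varphi\,\psi_{j-1}$, which vanishes exactly when $\psi_j = \varphi^j$.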
