Time Series – Finding the Inverse of Finite Polynomial in ARMA(p,q) Models

Tags: arima, polynomial, time-series

While learning about ARMA(p,q) models, I read that Box and Jenkins (1970) defined a very important class of stochastic processes, obtained by passing a white noise process through a linear filter.

This can be written as:

$w_t = \sum_{k=0}^{\infty} \Psi_k \, a_{t-k}$, [1]

where $w_t$ is the stochastic process, the $\Psi_k$ are the weights of the linear filter, and $a_t$ is a white noise process.

Box and Jenkins defined the linear filter as

$\Psi(B) = {\large \frac{\theta_q(B)}{\phi_p(B)}}$, [2]

where $\theta_q(\cdot)$ and $\phi_p(\cdot)$ are polynomials of the form $P(x) = 1 - c_1 x - c_2 x^2 - \cdots - c_k x^k$, and $B$ is the lag operator, which serves as the argument of both polynomials.

One can write [1] as

$w_t = \theta_q(B) \, \phi_p^{-1}(B) \, a_t$ [3]

About equation [3], I read:

It's evident that when $\phi_p(B)$ is inverted to obtain $\ w_t$ as a function of $\ a_t$, there will be a polynomial up to an infinite degree multiplying $a_t$ since $\phi_p(B)$ is a finite polynomial.

My question is: Is there a law that says that the inverse of a finite-degree polynomial will always be an infinite-degree polynomial?

I have looked for such a proof, but so far to no avail.
Thank you for your feedback.

Best Answer

The law you are looking for is the infinite geometric sum:

$$\sum_{t=0}^\infty r^t = \frac{1}{1-r} = (1-r)^{-1} \quad \quad \text{for }|r|<1.$$
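As a quick numerical sanity check of this identity (the value $r = 0.6$ is chosen arbitrarily, subject to $|r|<1$):

```python
# Compare a truncated geometric series against its closed form 1/(1 - r).
r = 0.6
partial_sum = sum(r**t for t in range(200))  # 200 terms; the tail is negligible
closed_form = 1.0 / (1.0 - r)

print(partial_sum, closed_form)  # both are approximately 2.5
```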

This law shows that the inverse of a polynomial of degree one (an affine function) can be written as an infinite-degree polynomial. To apply this to the inversion of an autoregressive characteristic polynomial of arbitrary finite degree, you first write the finite polynomial in its factorised form:

$$\phi_p(x) = \prod_{i=1}^p \Big( 1-\frac{x}{r_i} \Big),$$

where $r_1, ..., r_p$ are the roots of the polynomial. Now the polynomial is written as a product of first-order polynomials (i.e., affine functions). Over the polynomial argument range $|x| < \min |r_i|$, you have $|x/r_i| < 1$ for all $i=1,...,p$, which allows the following inversion:

$$\phi_p(x)^{-1} = \prod_{i=1}^p \Big( 1-\frac{x}{r_i} \Big)^{-1} = \prod_{i=1}^p \Big( \sum_{t=0}^\infty (x/r_i)^t \Big) = \sum_{t=0}^\infty \kappa_t x^t.$$
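The coefficients $\kappa_t$ can be computed without factorising at all: multiplying both sides by $\phi_p(x)$ and matching powers of $x$ gives the recursion $\kappa_0 = 1$, $\kappa_t = \sum_{j=1}^{\min(t,p)} c_j \kappa_{t-j}$. A small sketch of this (the helper `invert_poly` and the example polynomial $\phi(x) = 1 - 0.5x - 0.3x^2$, whose roots lie outside the unit circle, are my own illustrative choices):

```python
import numpy as np

def invert_poly(c, n_terms):
    """Power-series coefficients kappa_t of 1/phi(x), where
    phi(x) = 1 - c[0]*x - c[1]*x**2 - ... is a finite polynomial.
    Uses the recursion kappa_t = sum_j c[j-1] * kappa_{t-j}."""
    kappa = np.zeros(n_terms)
    kappa[0] = 1.0
    for t in range(1, n_terms):
        for j in range(1, min(t, len(c)) + 1):
            kappa[t] += c[j - 1] * kappa[t - j]
    return kappa

# Example: phi(x) = 1 - 0.5x - 0.3x^2
c = [0.5, 0.3]
kappa = invert_poly(c, 50)

# Check: phi(x) * (sum of kappa_t x^t) should be ~1 for x inside the
# radius of convergence (min |r_i| is about 1.17 here, so x = 0.4 is safe).
x = 0.4
phi = 1 - 0.5 * x - 0.3 * x**2
series = sum(k * x**t for t, k in enumerate(kappa))
print(phi * series)  # approximately 1
```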

This form is an infinite polynomial, with coefficients $\kappa_t$ that decrease to zero as $t \rightarrow \infty$. Note that this is a general result that occurs whenever you are dealing with inversion of polynomials; it is a result that occurs in areas of mathematics outside of time-series analysis, though one particular application is in the context of ARMA models.

Within the context of ARMA models, we use the backshift operator as the polynomial argument, rather than a fixed real number. Understanding how this operator functions requires some knowledge of operator theory, which looks at operators as mappings on a function space. Without going into the details, the insight that is relevant for our purposes is that the backshift operator behaves like the number one in the argument, and it obeys the invertibility property above (for proof, see e.g., Kasparis 2016). Thus, when you are inverting the polynomial with the backshift operator, you need $|r_i|>1$ for all $i=1,...,p$, and you then have:

$$\phi_p(B)^{-1} = \sum_{t=0}^\infty \kappa_t B^t.$$
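To make the backshift version concrete, here is a sketch for an AR(2) model (the coefficients $0.5$ and $0.3$ are an arbitrary stationary choice): the $\kappa_t$ obtained from the series expansion of $\phi_p(B)^{-1}$ coincide with the impulse response of the AR recursion, i.e. the MA($\infty$) weights.

```python
import numpy as np

# AR(2): w_t = 0.5 w_{t-1} + 0.3 w_{t-2} + a_t,
# i.e. phi(B) w_t = a_t with phi(B) = 1 - 0.5B - 0.3B^2.
phi1, phi2 = 0.5, 0.3
n = 30

# kappa-weights from the series expansion of phi(B)^{-1}
kappa = np.zeros(n)
kappa[0] = 1.0
for t in range(1, n):
    kappa[t] = phi1 * kappa[t - 1] + (phi2 * kappa[t - 2] if t >= 2 else 0.0)

# Impulse response: run the AR recursion with a single unit shock a_0 = 1
w = np.zeros(n)
for t in range(n):
    a_t = 1.0 if t == 0 else 0.0
    w[t] = a_t + phi1 * (w[t - 1] if t >= 1 else 0.0) \
               + phi2 * (w[t - 2] if t >= 2 else 0.0)

print(np.allclose(kappa, w))  # True: the kappa_t are the MA(inf) weights
```

Note how the weights decay towards zero, as claimed above: this is exactly the $|r_i| > 1$ (stationarity) condition at work.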
