How to show that $y_t=x_t+\nu_t$, where $x_t$ is an $AR(p)$ process and $\nu_t$ is a white noise, follows an ARMA(p,p) process?
Say $x_t=\phi x_{t-1} + \epsilon_t$. Substituting $x_t=y_t-\nu_t$, we get $y_t=\phi y_{t-1}+\nu_t-\phi \nu_{t-1} + \epsilon_t$. Doing this with an AR(p) process always leaves the extra $\epsilon_t$ added on top of an MA(p)-type combination of the $\nu$'s, so it is not obvious that the sum of these two terms is itself an MA(p) process.
Best Answer
You have $$y_t = x_t + v_t \tag{1} $$ and $$ \phi(B)x_t = e_t. $$ Applying $\phi(B)$ to both sides of (1) yields \begin{align} \phi(B)y_t &= \phi(B)x_t + \phi(B) v_t \\ &= e_t + \phi(B) v_t. \tag{2} \end{align} Consider the right hand side of (2). This is clearly a covariance stationary process. By the Wold decomposition theorem it must have a moving average representation. Since its autocovariance function cuts off for lags $k>p$, it must be an $MA(p)$ process, say $(1-\theta_1B-\dots-\theta_p B^p) u_t$. Hence, $y_t$ must be an $ARMA(p,p)$ process.
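The cutoff of the autocovariance function can be checked empirically. Below is a small Python sketch (not the R code given later in this answer): it simulates an AR(2) plus white noise, applies $\phi(B)$ to $y_t$, and verifies that the sample autocovariance of $\phi(B)y_t$ is essentially zero beyond lag $p=2$. The coefficients, noise variances, and sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.array([0.5, -0.3])       # illustrative stationary AR(2) coefficients
p = len(phi)
n = 200_000
sigma_e, sigma_v = 1.0, 0.7

# Simulate x_t = phi_1 x_{t-1} + phi_2 x_{t-2} + e_t
e = rng.normal(0.0, sigma_e, n)
x = np.zeros(n)
for t in range(p, n):
    x[t] = phi @ x[t - p:t][::-1] + e[t]

# Observed series y_t = x_t + v_t
y = x + rng.normal(0.0, sigma_v, n)

# w_t = phi(B) y_t = y_t - phi_1 y_{t-1} - phi_2 y_{t-2}
w = y[p:] - phi[0] * y[p - 1:-1] - phi[1] * y[:-2]

def acov(z, k):
    """Sample autocovariance of z at lag k."""
    z = z - z.mean()
    return np.dot(z[:len(z) - k], z[k:]) / len(z)

gamma = [acov(w, k) for k in range(6)]
# gamma[0], gamma[1], gamma[2] are nonzero; gamma[3..5] should be
# close to 0, since the autocovariance cuts off after lag p = 2.
```

For this example the theoretical values from the right hand side of (2) are $\gamma_0=\sigma_e^2+(1+\phi_1^2+\phi_2^2)\sigma_v^2$, $\gamma_2=-\phi_2\sigma_v^2$, and $\gamma_k=0$ for $k>2$, which the sample autocovariances reproduce.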
From the left hand side of (2), it is clear that its autoregressive parameters are equal to those of $x_t$. The moving average parameters $\theta_1,\theta_2,\dots,\theta_p$ and the white noise variance $\sigma_u^2$ of this $ARMA(p,p)$ process can be found by equating the autocovariance function of the right hand side of (2) with that of $\theta(B) u_t$ for lags $k=0,1,\dots,p$ and solving the $p+1$ resulting non-linear equations \begin{align} (1+\theta_1^2+\dots+\theta_p^2)\sigma_u^2 &= \sigma_e^2 + (1+\phi_1^2 +\dots +\phi_p^2)\sigma_v^2\\ (-\theta_1 + \theta_1\theta_2 +\dots+\theta_{p-1}\theta_p)\sigma_u^2 &= (-\phi_1 + \phi_1\phi_2 +\dots+\phi_{p-1}\phi_p)\sigma_v^2\\ &\vdots \tag{3} \\ (-\theta_{p-1} + \theta_1\theta_p)\sigma_u^2 &= (-\phi_{p-1} + \phi_1\phi_p)\sigma_v^2 \\ \theta_p \sigma_u^2&= \phi_p\sigma_v^2. \end{align}
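To make the system concrete, consider the special case $p=1$ (a worked example, with the same notation as (3)). The system reduces to $$ (1+\theta_1^2)\sigma_u^2 = \sigma_e^2 + (1+\phi_1^2)\sigma_v^2, \qquad \theta_1\sigma_u^2 = \phi_1\sigma_v^2. $$ Dividing the second equation by the first eliminates $\sigma_u^2$: $$ \frac{\theta_1}{1+\theta_1^2} = \frac{\phi_1\sigma_v^2}{\sigma_e^2+(1+\phi_1^2)\sigma_v^2} =: c, $$ i.e. $c\theta_1^2 - \theta_1 + c = 0$. One takes the root $\theta_1 = \bigl(1-\sqrt{1-4c^2}\bigr)/(2c)$, which satisfies $|\theta_1|<1$ so the MA part is invertible, and then recovers $\sigma_u^2 = \phi_1\sigma_v^2/\theta_1$.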
Here is an R function that solves these equations and returns the parameters of the $ARMA(p,p)$ model.
The following example checks that the autocovariance functions are indeed the same for a simple stationary AR(3) model and the computed ARMA(3,3) model: