You have
$$y_t = x_t + v_t \tag{1}
$$
and
$$
\phi(B)x_t = e_t.
$$
Applying $\phi(B)$ to both sides of (1) yields
\begin{align}
\phi(B)y_t &= \phi(B)x_t + \phi(B) v_t
\\ &= e_t + \phi(B) v_t. \tag{2}
\end{align}
Consider the right-hand side of (2). This is clearly a covariance stationary process, so by the Wold decomposition theorem it has a moving average representation. Because $e_t$ is white noise and $\phi(B)v_t$ is a finite moving average of order $p$ in $v_t$, the autocovariance function of the right-hand side cuts off for lags $k>p$, so it must be an MA($p$) process, say $(1-\theta_1B-\dots-\theta_p B^p) u_t$. Hence $y_t$ must be an ARMA($p$,$p$) process.
From the left-hand side of (2) it is clear that the autoregressive parameters of $y_t$ equal those of $x_t$. The moving average parameters $\theta_1,\theta_2,\dots,\theta_p$ and the white noise variance $\sigma_u^2$ of this ARMA($p$,$p$) process can be found by equating the autocovariance function
of the right-hand side of (2) with that of $\theta(B) u_t$ for lags $k=0,1,\dots,p$ and solving the resulting $p+1$ non-linear equations
\begin{align}
(1+\theta_1^2+\dots+\theta_p^2)\sigma_u^2 &= \sigma_e^2 + (1+\phi_1^2 +\dots +\phi_p^2)\sigma_v^2\\
(-\theta_1 + \theta_1\theta_2 +\dots+\theta_{p-1}\theta_p)\sigma_u^2 &= (-\phi_1 + \phi_1\phi_2 +\dots+\phi_{p-1}\phi_p)\sigma_v^2\\
&\vdots \tag{3} \\
(-\theta_{p-1} + \theta_1\theta_p)\sigma_u^2 &= (-\phi_{p-1} + \phi_1\phi_p)\sigma_v^2 \\
\theta_p \sigma_u^2&= \phi_p\sigma_v^2.
\end{align}
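For example, in the simplest case $p=1$ (an AR(1) signal observed with noise), system (3) reduces to two equations in the two unknowns $\theta_1$ and $\sigma_u^2$:
$$
(1+\theta_1^2)\sigma_u^2 = \sigma_e^2 + (1+\phi_1^2)\sigma_v^2, \qquad \theta_1\sigma_u^2 = \phi_1\sigma_v^2.
$$
Substituting $\sigma_u^2=\phi_1\sigma_v^2/\theta_1$ from the second equation into the first gives a quadratic in $\theta_1$; its two roots are reciprocals of each other, and the one with $|\theta_1|<1$ gives the invertible representation.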
Here is an R function that solves these equations and returns the parameters of the ARMA($p$,$p$) model.
arplusnoise2arma <- function(phi, se = 1, sv) {
  # phi: AR coefficients, se: variance of e_t, sv: variance of v_t
  p <- length(phi)  # order of the process
  # autocovariance function of the right hand side of (2)
  # (note: ltsa uses the theta(B) = 1 - theta_1*B - ... sign convention)
  gamma0 <- ltsa::tacvfARMA(theta = phi, maxLag = p, sigma2 = sv)
  gamma0[1] <- gamma0[1] + se
  # non-linear equations to solve, resulting from equating the autocovariance functions
  f <- function(par) {
    gamma1 <- ltsa::tacvfARMA(theta = par[1:p], maxLag = p, sigma2 = exp(par[p + 1]))
    gamma0 - gamma1
  }
  # solve the non-linear system
  fit <- rootSolve::multiroot(f, c(phi, 1), maxiter = 1000, rtol = 1e-12)
  # parameters of the new ARMA, possibly non-invertible
  theta <- fit$root[1:p]
  sigma2 <- exp(fit$root[p + 1])
  # reparameterize the MA part to make it invertible by moving roots outside the unit circle
  r <- 1/polyroot(c(1, -theta))  # inverse roots of the MA polynomial
  for (i in 1:p) {
    if (Mod(r[i]) > 1) {
      sigma2 <- sigma2*r[i]^2
      r[i] <- 1/r[i]
    }
  }
  sigma2 <- Re(sigma2)
  # compute the new coefficients of the MA polynomial from its inverse roots
  polycoef <- 1
  for (i in 1:p)
    polycoef <- c(polycoef, 0) - r[i]*c(0, polycoef)
  theta <- Re(-polycoef[-1])
  # return the invertible ARMA(p,p) model
  list(model = list(phi = phi, theta = theta, sigma2 = sigma2),
       estim.precis = fit$estim.precis)
}
The following example checks that the autocovariance functions agree as expected for a simple stationary AR(3) model observed with noise variance $\sigma_v^2=0.5$: the ACVF of the computed ARMA(3,3) model matches that of the AR(3) at all lags $k\ge 1$, and exceeds it by exactly $\sigma_v^2$ at lag 0, as it must since $y_t = x_t + v_t$:
> phi <- c(.2, -.1, .2)
> Mod(polyroot(c(1,-phi)))
[1] 1.678659 1.725853 1.725853
> result <- arplusnoise2arma(phi,1,.5)
> result
$model
$model$phi
[1] 0.2 -0.1 0.2
$model$theta
[1] 0.07286795 -0.04104890 0.06545496
$model$sigma2
[1] 1.527768
$estim.precis
[1] 4.176867e-14
> do.call(ltsa::tacvfARMA, c(result$model, maxLag=10))
[1] 1.5793650794 0.1904761905 -0.0317460317 0.1904761905 0.0793650794 -0.0095238095
[7] 0.0282539683 0.0224761905 -0.0002349206 0.0033561905 0.0051899683
> ltsa::tacvfARMA(phi=phi,theta=NULL,maxLag=10)
[1] 1.0793650794 0.1904761905 -0.0317460317 0.1904761905 0.0793650794 -0.0095238095
[7] 0.0282539683 0.0224761905 -0.0002349206 0.0033561905 0.0051899683
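To make the check explicit (a minimal sketch reusing the objects defined above), the two ACVFs should differ only at lag 0, by exactly $\sigma_v^2 = 0.5$:

gamma.y <- do.call(ltsa::tacvfARMA, c(result$model, maxLag = 10))  # ACVF of the ARMA(3,3)
gamma.x <- ltsa::tacvfARMA(phi = phi, maxLag = 10)                 # ACVF of the latent AR(3)
all.equal(gamma.y, gamma.x + c(0.5, rep(0, 10)))                   # TRUE up to the root-finder's precision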
Since the process is weakly stationary, we have $E[X_t]=E[X_{t-1}]$ by definition. Taking expectations on both sides of $X_t = c + \phi X_{t-1} + \theta\epsilon_{t-1} + \epsilon_t$ therefore gives $(1-\phi)E[X_t]=c+\theta E[\epsilon_{t-1}]+E[\epsilon_t]$. Since $E[\epsilon_t] = E[\epsilon_{t-1}] = 0$, as also given in your question statement, this yields $E[X_t]=c/(1-\phi)$, so your answer is correct. Note that to find $E[X_t]$ we do not need the statement that the mean of a stationary ARMA(1,1) equals the mean of an AR(1): that statement ignores the mean of the MA terms, and the two means only coincide here because the noise has zero mean.
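A quick simulation sketch to confirm this (with made-up values $c=2$, $\phi=0.7$, $\theta=0.8$; note that arima.sim generates a zero-mean ARMA, so the implied mean $c/(1-\phi)$ is added afterwards):

set.seed(1)
c0 <- 2; phi1 <- 0.7; theta1 <- 0.8          # hypothetical parameter values
mu <- c0/(1 - phi1)                          # implied stationary mean c/(1-phi)
x <- arima.sim(model = list(ar = phi1, ma = theta1), n = 1e5) + mu
mean(x)                                      # should be close to mu (about 6.67)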
Best Answer
Yes, you are right, but what you mean is an ARMA(1,1), which is the same as an ARIMA(1,0,1). So you want to calculate the one-period forecast of an ARMA(1,1).
The one-period forecast is given by $\hat{x}_{t+1|t}=E(\phi_1x_t + \delta + \epsilon_{t+1} + \theta \epsilon_t\mid x_t,\dots,x_1)$. In your case it seems that you have no constant term, i.e. $\delta = 0$.
So in your case this gives
$\hat{x}_{t+1|t}=E(0.7 x_t + \epsilon_{t+1} + 0.8 \epsilon_t|x_t,...,x_1)$
Since $E(\epsilon_{t+1})=0$ this gives
$\hat{x}_{t+1|t}=0.7x_t+0.8\epsilon_t$
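As an illustration (a minimal sketch on simulated data, where the fitted coefficients and residuals stand in for the true $\phi$, $\theta$ and $\epsilon_t$), the same one-step forecast can be reproduced in R:

set.seed(42)
x <- arima.sim(model = list(ar = 0.7, ma = 0.8), n = 500)   # simulated ARMA(1,1)
fit <- arima(x, order = c(1, 0, 1), include.mean = FALSE)   # fit without constant term
predict(fit, n.ahead = 1)$pred                              # one-step forecast
# by hand: phi*x_t + theta*eps_t, using the fitted coefficients and residuals
unname(coef(fit)["ar1"]*x[500] + coef(fit)["ma1"]*residuals(fit)[500])

The two values should agree up to negligible effects of the Kalman filter's initialization.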
I am not sure: do you mean you want to calculate the "variance" of the forecast to get prediction intervals? This is done by calculating the mean squared error. For the following you have to know the principles of time series analysis; I just give a short summary and apply it to your case:
The mean squared error of the $I$-step forecast is given by
$MSE(\hat{x}_{T+I|T})=E\left((x_{T+I}-\hat{x}_{T+I|T})^2\right)=(1+\Psi_1^2+\dots+\Psi_{I-1}^2)\sigma_\epsilon^2$
The $\Psi$ are the coefficients of the $MA(\infty)$ representation of the process $a(L)x_t=b(L)\epsilon_t$. In your case of the specific ARMA(1,1), $(1-0.7L)x_t=(1+0.8L)\epsilon_t$, they are obtained (look at a time series book) by matching coefficients of powers of $L$ in
$(1-0.7L)(1+ \Psi_1L+ \Psi_2L^2+\dots)=1+0.8L.$
You can solve this and obtain the $\Psi$ values. This is not even necessary here, since you only need the MSE of the one-step-ahead prediction: for $I=1$ the sum of $\Psi$ terms is empty, so $MSE(\hat{x}_{T+I|T})=(1+\Psi_1^2+\dots+\Psi_{I-1}^2)\sigma_\epsilon^2$ reduces to $MSE(\hat{x}_{T+1|T})=\sigma_\epsilon^2$.
The $\sigma_\epsilon^2$ can be obtained from your estimation output in R, Stata, or whatever software you use.
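For longer horizons, the $\Psi$ weights and the resulting MSE can be computed directly with stats::ARMAtoMA (a sketch; sigma2e is a placeholder for the innovation variance taken from your estimation output, e.g. fit$sigma2 in R):

psi <- ARMAtoMA(ar = 0.7, ma = 0.8, lag.max = 10)   # Psi_1, Psi_2, ...; here Psi_1 = 1.5
sigma2e <- 1                                        # placeholder for the estimated innovation variance
I <- 5                                              # forecast horizon
mse <- (1 + sum(psi[seq_len(I - 1)]^2))*sigma2e     # I-step forecast MSE
sqrt(mse)                                           # forecast standard error for prediction intervals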