Time Series – How to Generate Correlated ARMA(1,1) Models

Tags: arma, cross-correlation, time series

I am interested in generating two ARMA(1,1) time series with a pre-defined cross-correlation $\rho$ between $x_t$ and $y_t$.

If you could provide the mathematical framework and the R code for generating them, that would be perfect!

I have looked at the question Generate two correlated ARMA(1,1) processes, which is not a duplicate: that question (despite the title) asks for a given correlation between the errors, while I am looking for a given correlation between the series themselves.

Best Answer

I know that you want the correlation between the series themselves, but a valid way (and to me the most natural way) to achieve it is by choosing the correlation between the disturbance terms. As I explain below, you can follow the idea given here, here, and here. All you need is the relationship between the parameters of the ARMA(1,1) models and the desired correlation between the series.

Let's define the data generating process as follows:

\begin{equation} x_t = \phi x_{t-1} + \epsilon_t + \theta \epsilon_{t-1} \,, \quad \epsilon_t \sim \hbox{NID}(0, \sigma^2_\epsilon) \\ y_t = \alpha y_{t-1} + \eta_t + \beta \eta_{t-1} \,, \quad \eta_t \sim \hbox{NID}(0, \sigma^2_\eta) \\ \hbox{Cov}(\epsilon_t, \eta_s) = \sigma \, \hbox{ if } t=s \hbox{ and zero otherwise.} \end{equation}
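Before turning to the relationship between the parameters, here is a minimal sketch of this DGP simulated directly via the ARMA(1,1) recursions. The parameter values and the disturbance covariance of 1 are illustrative placeholders, not part of the derivation:

set.seed(1)
n <- 500
phi <- 0.7; theta <- 0.3          # AR and MA coefficients of x_t
alpha <- -0.4; beta <- 0.2        # AR and MA coefficients of y_t
Sigma.e <- matrix(c(2, 1, 1, 3), 2)          # Var(eps), Cov(eps, eta), Var(eta)
e <- mvtnorm::rmvnorm(n, sigma = Sigma.e)    # contemporaneously correlated disturbances
x <- y <- numeric(n)
for (t in 2:n) {                  # x[1], y[1] start at zero; no burn-in for brevity
  x[t] <- phi   * x[t-1] + e[t, 1] + theta * e[t-1, 1]
  y[t] <- alpha * y[t-1] + e[t, 2] + beta  * e[t-1, 2]
}
cor(x, y)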

It can be checked that the relationship between the parameters of the ARMA(1,1) processes $x_t$ and $y_t$ and their correlation is given by:

\begin{equation} \rho = \frac{\gamma}{\sigma_x\sigma_y} \,, \hbox{ where } \gamma = \hbox{Cov}(x_t, y_t) = \frac{\sigma (\phi\beta + 1 + \alpha\theta + \beta\theta)}{1 - \phi\alpha} \, \hbox{ and } \\ \sigma_x = \sqrt{\sigma^2_\epsilon\frac{1 + \theta^2 + 2\phi\theta}{1 - \phi^2}} \,, \sigma_y = \sqrt{\sigma^2_\eta\frac{1 + \beta^2 + 2\alpha\beta}{1 - \alpha^2}} \,. \end{equation}

Given the AR and MA coefficients and the variances of the disturbance terms, you can easily get the value of $\hbox{Cov}(\epsilon_t,\eta_t)=\sigma$ that yields a desired correlation between the series, $\rho^*$, as follows:

\begin{equation} \sigma = \rho^* \sigma_x \sigma_y (1 - \phi\alpha) / (\phi\beta + 1 + \alpha\theta + \beta\theta) \,. \end{equation}
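In R, this mapping can be wrapped in a small helper function. The name implied.cov and its argument names are mine, not from any package; with the parameter values used in the example below it reproduces the value of scov:

# implied covariance of the disturbances for a target correlation rho.star
# between the two ARMA(1,1) series
implied.cov <- function(rho.star, phi, theta, alpha, beta, var.eps, var.eta) {
  sdx <- sqrt(var.eps * (1 + theta^2 + 2*phi*theta) / (1 - phi^2))
  sdy <- sqrt(var.eta * (1 + beta^2  + 2*alpha*beta) / (1 - alpha^2))
  rho.star * sdx * sdy * (1 - phi*alpha) / (phi*beta + 1 + alpha*theta + beta*theta)
}
implied.cov(0.2, phi = 0.7, theta = 0.3, alpha = -0.4, beta = 0.2,
            var.eps = 2, var.eta = 3)
# [1] 1.022579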

Below I show an example to test this relationship between the parameters. Given the parameters defined below and a desired correlation between the series equal to 0.2, several pairs of series are generated from ARMA(1,1) models with correlated disturbances. The correlation between each pair of series is stored in the object rhos.

require("mvtnorm")
# parameters of the data generating process
desired.rho <- 0.2
Sigma <- diag(c(2,3))    
phi <- 0.7
theta <- 0.3
alpha <- -0.4
beta <- 0.2
# implied value of the covariance between the disturbance terms
sdx <- sqrt(Sigma[1,1] * (1 + theta^2 + 2*phi*theta) / (1 - phi^2))
sdy <- sqrt(Sigma[2,2] * (1 + beta^2 + 2*alpha*beta) / (1 - alpha^2))
scov <- desired.rho * sdx * sdy * (1 - phi*alpha) / (phi*beta + 1 + alpha*theta + beta*theta)
scov
# [1] 1.022579
Sigma[1,2] <- Sigma[2,1] <- scov
n <- 200
niter <- 10000
rhos <- rep(NA, niter)
set.seed(123)
for (i in seq_len(niter))
{
  # draw contemporaneously correlated Gaussian disturbances
  eps <- mvtnorm::rmvnorm(n = n, mean = rep(0, 2), sigma = Sigma)
  # simulate the two ARMA(1,1) series from these disturbances
  x1 <- arima.sim(n = n, model = list(ar = phi, ma = theta), innov = eps[,1])
  x2 <- arima.sim(n = n, model = list(ar = alpha, ma = beta), innov = eps[,2])
  # store the sample correlation between the series
  rhos[i] <- cor(x1, x2)
}
summary(rhos)
#     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
# -0.02344  0.16380  0.19970  0.19950  0.23590  0.39460 

We can see that the correlation between the series (not between the errors) is on average close to the value 0.2, as desired in this exercise. This correlation implies a covariance between the errors equal to 1.022579.
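As a quick algebraic cross-check, reusing the objects defined in the code above, the covariance scov plugged back into the formula for $\gamma$ recovers the target correlation:

# theoretical covariance and correlation between the series implied by scov
gamma <- scov * (phi*beta + 1 + alpha*theta + beta*theta) / (1 - phi*alpha)
gamma / (sdx * sdy)
# [1] 0.2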

Edit

This is the derivation of the elements in the expression above. It makes use of the following facts: the process is stationary; $x_t$ depends on past values of $\epsilon_t$ but not on future values of $\epsilon_t$; $\epsilon_t$ is independently distributed (same for $\eta_t$); and the covariance between concurrent values of $\epsilon_t$ and $\eta_t$ is $\sigma$.

\begin{eqnarray} \gamma &=& \hbox{E} \left[ (\phi x_{t-1} + \epsilon_t + \theta \epsilon_{t-1})(\alpha y_{t-1} + \eta_t + \beta \eta_{t-1}) \right] \\ &=& \hbox{E}\left[ \phi\alpha x_{t-1}y_{t-1} \right] + \hbox{E}\left[ \phi x_{t-1}\eta_t \right] + \hbox{E}\left[ \phi\beta x_{t-1}\eta_{t-1} \right] + \hbox{E}\left[ \alpha\epsilon_t y_{t-1} \right] + \hbox{E}\left[ \epsilon_t\eta_t \right] + \\ && +\, \hbox{E}\left[ \beta\epsilon_t\eta_{t-1} \right] + \hbox{E}\left[ \alpha\theta \epsilon_{t-1}y_{t-1} \right] + \hbox{E}\left[ \theta\epsilon_{t-1}\eta_t\right] + \hbox{E}\left[ \beta\theta\epsilon_{t-1}\eta_{t-1} \right] \\ &=& \phi\alpha\gamma + 0 + \phi\beta\sigma + 0 + \sigma + 0 + \alpha\theta\sigma + 0 + \beta\theta\sigma \\ (1 - \phi\alpha)\gamma &=& \sigma (\phi\beta + 1 + \alpha\theta + \beta\theta) \\ \gamma &=& \frac{\sigma (\phi\beta + 1 + \alpha\theta + \beta\theta)}{1 - \phi\alpha} \end{eqnarray}

\begin{eqnarray} \hbox{Var}(x_t) = \sigma^2_x &=& \hbox{E}\left[ (\phi x_{t-1} + \epsilon_t + \theta \epsilon_{t-1})^2 \right] \\ \sigma^2_x &=& \phi^2 \hbox{E}\left[ x_{t-1}^2 \right] + 2\phi\hbox{E}\left[ x_{t-1}\epsilon_t \right] + 2\phi\theta\hbox{E}\left[ x_{t-1}\epsilon_{t-1} \right] + \hbox{E}\left[ \epsilon_t^2 \right] + \\ &+& 2\theta \hbox{E}\left[ \epsilon_t\epsilon_{t-1} \right] + \theta^2 \hbox{E}\left[ \epsilon_{t-1}^2 \right] \\ \sigma^2_x &=& \phi^2 \sigma^2_x + 0 + 2\phi\theta\sigma^2_\epsilon + \sigma^2_\epsilon + 0 + \theta^2\sigma^2_\epsilon \\ (1 - \phi^2) \sigma^2_x &=& \sigma^2_\epsilon (1 + \theta^2 + 2\phi\theta) \\ \sigma^2_x &=& \frac{\sigma^2_\epsilon (1 + \theta^2 + 2\phi\theta)}{(1 - \phi^2)} \end{eqnarray}
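These two expressions can also be verified numerically with one long simulation; a sketch, reusing phi, theta, alpha, beta and Sigma (with the off-diagonal elements set to scov) from the example above:

set.seed(1)
n <- 1e6
e <- mvtnorm::rmvnorm(n, sigma = Sigma)
# ARMA(1,1) via recursive filtering of the MA(1)-transformed disturbances
x <- stats::filter(e[, 1] + theta * c(0, e[-n, 1]), filter = phi,   method = "recursive")
y <- stats::filter(e[, 2] + beta  * c(0, e[-n, 2]), filter = alpha, method = "recursive")
cov(x, y)   # empirical, should be close to the theoretical gamma below
scov * (phi*beta + 1 + alpha*theta + beta*theta) / (1 - phi*alpha)
var(x)      # empirical, should be close to the theoretical variance below
Sigma[1,1] * (1 + theta^2 + 2*phi*theta) / (1 - phi^2)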
