Impulse Response Function of VAR(1) – Calculation with Example

Tags: econometrics, impulse response, time series, vector-autoregression

How to calculate:

1) Simple IRF

2) Orthogonalized IRF (Y2 -> Y1)

Of an unrestricted VAR(1) model:

$Y_{1, t} = A_{11}Y_{1, t-1} + A_{12} Y_{2, t-1} + e_{1,t},$

$Y_{2, t} = A_{21}Y_{1, t-1} + A_{22} Y_{2, t-1} + e_{2,t}.$

Let's just say that $A_{11} = 0.8$, $A_{12} = 0.4$,
$A_{21} = -0.3$, $A_{22} = 1.2$

And the shock size is 1 for both residuals. You don't have to use the provided values as long as the point gets across. Let's also say that the IRF length is 4. I think this should be enough info, but let me know if something else is needed.

Bonus question: How does the response change in a structural VAR (any structure)?

Best Answer

For a VAR(1), we write the model as $$ y_t=\Pi y_{t-1}+\epsilon_t $$ where $y_t$ and $\epsilon_t$ are $p\times 1$ vectors. If you have more lags, the idea extends in the same way (and it is particularly straightforward using the companion form).

The impulse response is the derivative with respect to the shocks. So the impulse response at horizon $h$ of the variables to an exogenous shock to variable $j$ is $$ \frac{\partial y_{t+h}}{\partial \epsilon_{j, t}}=\frac{\partial }{\partial \epsilon_{j, t}}\left(\Pi y_{t+h-1}+\epsilon_{t+h}\right)=\cdots=\frac{\partial }{\partial \epsilon_{j, t}}\left(\Pi^{h+1} y_{t-1}+\sum_{i=0}^h\Pi^i\epsilon_{t+h-i}\right). $$ This derivative eliminates all terms but one, namely the $i=h$ term in the sum, $\Pi^h\epsilon_t$, for which we get $$ \frac{\partial y_{t+h}}{\partial \epsilon_{j, t}}=\frac{\partial }{\partial \epsilon_{j, t}}\left(\Pi^{h+1} y_{t-1}+\sum_{i=0}^h\Pi^i\epsilon_{t+h-i}\right)=\frac{\partial }{\partial \epsilon_{j, t}}\Pi^h\epsilon_{t}=\Pi^he_j, $$ where $e_j$ is the $j$th column of the $p\times p$ identity matrix. That is, the response of all $p$ variables at horizon $h$ to a shock to variable $j$ is the $j$th column of $\Pi^h$. If you take the derivative with respect to the full vector $\epsilon_t$ instead, the result is the matrix $\Pi^h$, since the selection vectors taken together give the identity matrix.
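To make this concrete with the numbers from the question, here is a minimal numpy sketch. The matrix `Pi` and the IRF length of 4 are taken directly from the question; everything else (variable names, printing) is just illustrative.

```python
import numpy as np

# Coefficients from the question: A11 = 0.8, A12 = 0.4, A21 = -0.3, A22 = 1.2
Pi = np.array([[0.8, 0.4],
               [-0.3, 1.2]])

H = 4  # IRF length from the question

# Simple (non-orthogonalized) IRF: the response at horizon h to a unit
# shock in variable j is the j-th column of Pi^h.
for h in range(H + 1):
    Pi_h = np.linalg.matrix_power(Pi, h)
    print(f"h={h}: response to e1 shock = {Pi_h[:, 0]}, "
          f"response to e2 shock = {Pi_h[:, 1]}")

# The Y2 -> Y1 response is entry [0, 1] of Pi^h:
# h = 0, 1, 2, 3, 4  ->  0, 0.4, 0.80, 1.168, 1.472
```

As an aside, with these particular coefficients the eigenvalues of $\Pi$ have modulus $\sqrt{1.08}\approx 1.04$, so the process is not stationary and the responses grow with the horizon; the finite-horizon IRFs above are still perfectly well defined.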

That is the non-orthogonalized case without identification, which I believe is not so common in the literature. What people usually use is either some sophisticated identification scheme, or more often a Cholesky decomposition. To study this, it is more convenient to work with the vector moving average form of the model (which exists if it is stationary) $$ y_t=\sum_{s=0}^\infty\Psi_s\epsilon_{t-s}. $$ The problem for interpretation arises when the error terms are correlated, because then a shock to variable $j$ arrives together with a correlated contemporaneous shock to variable $k$, for example. To eliminate this, you can use a Cholesky decomposition, which orthogonalizes the innovations. Let's suppose that the covariance matrix of the errors is $\Omega$. We decompose it as $\Omega=PP'$ and introduce $v_t=P^{-1}\epsilon_t$, which are error terms with the identity matrix as covariance matrix. Some manipulation gives $$ y_t=\sum_{s=0}^\infty\Psi_s\epsilon_{t-s}=\sum_{s=0}^\infty\Psi_sPP^{-1}\epsilon_{t-s}=\sum_{s=0}^\infty\Psi_s^*v_{t-s}, $$ where $\Psi_s^*=\Psi_sP$. Consider now the response to an orthogonalized shock: $$ \frac{\partial y_{t+h}}{\partial v_{j, t}}=\frac{\partial }{\partial v_{j, t}}\left(\sum_{s=0}^\infty\Psi_s^*v_{t+h-s}\right)=\Psi_h^*e_j. $$
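Here is the same calculation with orthogonalized shocks, again as a minimal numpy sketch. Since $\Psi_h=\Pi^h$ for a VAR(1), we have $\Psi_h^*=\Pi^hP$. Note that the question does not specify an error covariance matrix, so the `Omega` below is an assumed value purely for illustration.

```python
import numpy as np

Pi = np.array([[0.8, 0.4],
               [-0.3, 1.2]])   # coefficients from the question

# Assumed error covariance matrix (NOT given in the question; illustrative only)
Omega = np.array([[1.0, 0.5],
                  [0.5, 1.0]])

# Cholesky factor: Omega = P P' with P lower triangular
P = np.linalg.cholesky(Omega)

# Orthogonalized IRF at horizon h: Psi*_h = Pi^h P; its j-th column is the
# response to a one-standard-deviation orthogonalized shock v_j.
H = 4
for h in range(H + 1):
    Psi_star_h = np.linalg.matrix_power(Pi, h) @ P
    print(f"h={h}:\n{Psi_star_h}")
```

Because $P$ is lower triangular, $\Psi_0^*=P$ is lower triangular too, so with this ordering $Y_1$ does not respond to the orthogonalized $Y_2$ shock on impact; the ordering of the variables therefore matters for the results.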

To calculate this in practice, you will need to find the moving average matrices $\Psi$. This you do recursively. If you have $K$ lags: $$ \Psi_s=0, \quad (s=-K+1, -K+2, \dots, -1)\\ \Psi_0=I\\ \Psi_s=\sum_{i=1}^K\Pi_i\Psi_{s-i}, \quad (s=1, 2, \dots). $$ With estimates, you just put hats on the $\Pi$ matrices and proceed; $P$ is found from a Cholesky decomposition of the estimated error covariance matrix $\hat\Omega$.
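For completeness, here is a sketch of that recursion for a general number of lags $K$; the function name `vma_matrices` and the list `Pi_list` holding $\Pi_1,\dots,\Pi_K$ are hypothetical names used only for illustration.

```python
import numpy as np

def vma_matrices(Pi_list, H):
    """Compute Psi_0, ..., Psi_H from the VAR coefficient matrices
    Pi_1, ..., Pi_K using the recursion in the text."""
    K = len(Pi_list)
    p = Pi_list[0].shape[0]
    Psi = [np.eye(p)]                      # Psi_0 = I
    for s in range(1, H + 1):
        # Psi_s = sum_{i=1}^K Pi_i Psi_{s-i}, with Psi_{s-i} = 0 for s - i < 0
        Psi_s = sum(Pi_list[i] @ Psi[s - 1 - i] for i in range(min(K, s)))
        Psi.append(Psi_s)
    return Psi

# For the VAR(1) from the question this reproduces Psi_s = Pi^s:
Pi = np.array([[0.8, 0.4], [-0.3, 1.2]])
Psi = vma_matrices([Pi], H=4)
```

With estimated coefficients, you would pass the $\hat\Pi_i$ matrices instead.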


Edit: In univariate time series analysis, one standard result is that every stationary AR process can be written as an MA($\infty$) process. You have the same result for multivariate time series, meaning that we can always rewrite a stationary VAR($p$) as a VMA($\infty$). This is central to impulse response analysis. The case with only one lag is the easiest. In this case, we may write $$ y_t=\Pi y_{t-1}+\epsilon_t=\Pi(\Pi y_{t-2}+\epsilon_{t-1})+\epsilon_t=\cdots=\sum_{s=0}^\infty \Pi^s\epsilon_{t-s}. $$ The implied steps in the $\cdots$ part might not be obvious, but it is just repeated substitution using the recursive nature of the model. So for the VAR(1), the moving average coefficients $\Psi_s$ are just $\Psi_s=\Pi^s$. For more lags, it gets a little more complicated, but the recursive relations above cover that case.
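As a quick sanity check, the recursion from the previous section indeed reproduces the matrix powers for the question's $\Pi$; a minimal sketch:

```python
import numpy as np

Pi = np.array([[0.8, 0.4], [-0.3, 1.2]])  # coefficients from the question

# VAR(1) recursion: Psi_0 = I, Psi_s = Pi Psi_{s-1}
Psi = [np.eye(2)]
for s in range(1, 5):
    Psi.append(Pi @ Psi[-1])

# For a VAR(1) the recursion collapses to Psi_s = Pi^s
for s in range(5):
    assert np.allclose(Psi[s], np.linalg.matrix_power(Pi, s))
```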

In impulse response analysis, the moving average form of the model is particularly convenient. The reason is that if you want to find the response of $y_{t+h}$ to a shock to $\epsilon_{j, t}$ and start from the usual VAR(1) form $$ y_{t+h}=\Pi y_{t+h-1}+\epsilon_{t+h}, $$ then there is no $\epsilon_t$ in the model as it stands, so you have to do recursive substitution until you get to it (as I did in the beginning). But if you have the moving average form of the model, you have it immediately on the right-hand side. So for the VAR(1), you will find that $$ \frac{\partial y_{t+h}}{\partial \epsilon_{j, t}}=\frac{\partial}{\partial \epsilon_{j, t}}\left(\sum_{s=0}^\infty\Psi_s\epsilon_{t+h-s}\right)=\Psi_he_j=\Pi^he_j, $$ where $e_j$ again is the $j$th column of the $p\times p$ identity matrix. As you see, this is the same result as we found in the beginning, but here we used the moving average form of the model to get it; the two representations are just two sides of the same coin.
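Plugging the question's values into this formula, with $e_2$ selecting the $Y_2$ shock, the non-orthogonalized responses to a unit shock in $e_{2,t}$ over the requested IRF length of 4 work out (by direct matrix multiplication) to $$ \Pi^h e_2 = \begin{pmatrix}0\\1\end{pmatrix},\ \begin{pmatrix}0.4\\1.2\end{pmatrix},\ \begin{pmatrix}0.80\\1.32\end{pmatrix},\ \begin{pmatrix}1.168\\1.344\end{pmatrix},\ \begin{pmatrix}1.472\\1.262\end{pmatrix} \quad\text{for } h=0,1,2,3,4, $$ so the $Y_2\to Y_1$ response asked about is the first component: $0,\ 0.4,\ 0.80,\ 1.168,\ 1.472$.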