If two variables are cointegrated, then there must be Granger causality in at least one direction (X causes Y, Y causes X, or both).
Granger causality is not instantaneous causality. X is said to Granger-cause Y if lagged values of X are helpful in predicting Y above and beyond the information contained in lagged values of Y alone.
Not necessarily.
[W]hat is this Granger test for and how to interpret it?
Basically, Granger causality $x \xrightarrow{\text{Granger}} y$ exists when using lags of $x$ alongside the lags of $y$ for forecasting $y$ delivers better forecast accuracy than using only the lags of $y$ (without the lags of $x$).
You can find definitions and details in Wikipedia and in free textbooks and lecture notes online. There are also many examples on this site, just check the threads tagged with granger-causality.
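To make the definition concrete, here is a minimal, self-contained sketch of the idea in Python: fit a restricted model (lags of $y$ only) and an unrestricted model (lags of $y$ plus lags of $x$) by OLS and compare them with an F statistic. The variable names and simulated data are mine for illustration; this is not the implementation used by grangertest() in R.

```python
import random

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit, via the normal
    equations solved with Gauss-Jordan elimination."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    beta = [A[i][k] / A[i][i] for i in range(k)]
    return sum((yi - sum(b * xi for b, xi in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

def granger_F(x, y, p=1):
    """F statistic for H0: the p lags of x do not help predict y.
    Restricted model: y_t ~ const + p lags of y.
    Unrestricted model: the same plus p lags of x."""
    n = len(y)
    rows_r, rows_u, target = [], [], []
    for t in range(p, n):
        ylags = [y[t - j] for j in range(1, p + 1)]
        xlags = [x[t - j] for j in range(1, p + 1)]
        rows_r.append([1.0] + ylags)
        rows_u.append([1.0] + ylags + xlags)
        target.append(y[t])
    rss_r, rss_u = ols_rss(rows_r, target), ols_rss(rows_u, target)
    T, k_u = len(target), 1 + 2 * p
    return ((rss_r - rss_u) / p) / (rss_u / (T - k_u))

# Simulated example: x Granger-causes y, but not the other way round.
random.seed(42)
x = [0.0]
for _ in range(500):
    x.append(0.5 * x[-1] + random.gauss(0, 1))
y = [0.0]
for t in range(1, len(x)):
    y.append(0.3 * y[-1] + 0.8 * x[t - 1] + random.gauss(0, 1))

print(granger_F(x, y, p=1))  # F for H0: x does not Granger-cause y
print(granger_F(y, x, p=1))  # F for H0: y does not Granger-cause x
```

With this data-generating process the first F statistic should come out far larger than the second, since $y_t$ genuinely loads on $x_{t-1}$ while $x$ is a pure AR(1).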
It says in the results that the null hypothesis is "H0: e do not Granger-cause prod rw U", does that mean it is testing whether e Granger causes prod, rw, U all at the same time with one p-value?
You are right. Note that in a 4-variable VAR(2) model, testing whether one variable does not Granger-cause the other three amounts to testing $3 \times 2$ zero restrictions (its lag coefficients in the three other equations, times two lags), and that is also what the test summary shows: df1 = 6.
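The restriction count can be spelled out as a trivial sketch (K and p here match the 4-variable VAR(2) from the question):

```python
def n_restrictions(K, p):
    """Zero restrictions for H0: one variable does not Granger-cause
    the other K-1 variables in a VAR(p): its p lag coefficients are
    set to zero in each of the other K-1 equations."""
    return (K - 1) * p

print(n_restrictions(K=4, p=2))  # -> 6, matching df1 in the test summary
```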
When using grangertest() in R, one always needs to specify both a cause and the dependent variable, so it is not entirely intuitive for me how causality() works.
This is because in a $K$-variate system with $K>2$ there are many possible causal links. $x_i$ may cause $x_j$; $x_i$ may cause $x_j$ and $x_k$; $x_i$ and $x_j$ may cause $x_k$; etc. So the function requires you to specify precisely which causal link you want to examine.
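Just to illustrate how quickly the number of possible links grows, here is a small stdlib-Python count of the ordered (cause set, effect set) pairs among $K$ variables. This enumeration is my own illustration, not anything causality() computes:

```python
from itertools import combinations

def causal_links(K):
    """Count ordered pairs (C, E) of disjoint, non-empty subsets of
    K variables: C the hypothesised causes, E the affected variables."""
    items = list(range(K))
    count = 0
    for c_size in range(1, K):           # cause set leaves room for effects
        for cause in combinations(items, c_size):
            rest = [v for v in items if v not in cause]
            count += 2 ** len(rest) - 1  # any non-empty effect set
    return count

print(causal_links(2))  # -> 2  (x causes y, or y causes x)
print(causal_links(4))  # -> 50 candidate links in a 4-variable system
```

Already with four variables there are 50 candidate links, which is why the function makes you name the one you want to test.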
Best Answer
I was looking for the answer to this same question and found it in the book Introduction to Modern Time Series Analysis (second edition) by Gebhard Kirchgassner, Jurgen Wolters and Uwe Hassler, on page 97.
Granger Causality: x Granger-causes y if a model that uses current and past values of x and current and past values of y to predict future values of y has a smaller forecast error than a model that only uses current and past values of y to predict y. In other words, Granger causality answers the following question: does the past of variable x help improve the prediction of future values of y?
Instantaneous Causality: x instantaneously Granger-causes y if a model that uses current, past and future values of x together with current and past values of y to predict y has a smaller forecast error than a model that only uses current and past values of x and current and past values of y. In other words, instantaneous Granger causality answers the question: does knowing the future of x help me better predict the future of y? If I know what x is going to do, does it help me know what y is going to do?
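In symbols, the two definitions can be restated via one-step forecast error variances (the notation here is my own paraphrase in the general style of such textbooks: $\sigma^2(y_{t+1} \mid I_t)$ is the forecast error variance of $y_{t+1}$ given the information set $I_t$):

$$x \xrightarrow{\text{Granger}} y \quad\Longleftrightarrow\quad \sigma^2\!\left(y_{t+1} \mid I_t\right) \;<\; \sigma^2\!\left(y_{t+1} \mid I_t \setminus \{x_s : s \le t\}\right),$$

$$x \xrightarrow{\text{inst.}} y \quad\Longleftrightarrow\quad \sigma^2\!\left(y_{t+1} \mid I_t \cup \{x_{t+1}\}\right) \;<\; \sigma^2\!\left(y_{t+1} \mid I_t\right).$$

The first says the past of $x$ shrinks the forecast error of $y$; the second says the contemporaneous value $x_{t+1}$ shrinks it further.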
I know this is an old question, but I thought I would answer it in case someone else is struggling as I was with this.
The book goes deeply into the math behind these two concepts, so please take a look at it if you want a more formal treatment.