What can be inferred from "covariance matrix of residuals" and "correlation matrix of residuals" after a VAR

correlation, covariance, residuals, variance, vector-autoregression

I have this VAR:

summary(VAR(V6CADModelSt45obs1D.df[,c(5,3,2,6,1,4)], p=5, type="none", ic="SC"))

The following is the result of this VAR:

**VAR Estimation Results:**  
Endogenous variables: FDI, GrowthRate, ExcRate1d, EnergyImp1d, CAD_GDP, openness1d 
Deterministic variables: none   
Sample size: 40   
Log Likelihood: -243.442   
Roots of the characteristic polynomial:  ALL LESS THAN 1

Call:
VAR(y = V6CADModelSt45obs1D.df[, c(5, 3, 2, 6, 1, 4)], p = 5, type = "none", ic = "SC")


Estimation results for equation FDI:    
Residual standard error: 1.776 on 10 degrees of freedom  
Multiple R-Squared: 0.9511, Adjusted R-squared: 0.8045   
F-statistic: 6.488 on 30 and 10 DF,  p-value: 0.001819   


Estimation results for equation GrowthRate:   
Residual standard error: 1.572 on 10 degrees of freedom  
Multiple R-Squared: 0.9859, Adjusted R-squared: 0.9436   
F-statistic: 23.32 on 30 and 10 DF,  p-value: 5.476e-06   


Estimation results for equation ExcRate1d:     
Residual standard error: 3.962 on 10 degrees of freedom  
Multiple R-Squared: 0.9797, Adjusted R-squared: 0.9187   
F-statistic: 16.07 on 30 and 10 DF,  p-value: 3.177e-05   


Estimation results for equation EnergyImp1d:   
Residual standard error: 0.8085 on 10 degrees of freedom  
Multiple R-Squared: 0.9945, Adjusted R-squared: 0.9778   
F-statistic: 59.82 on 30 and 10 DF,  p-value: 5.695e-08   


Estimation results for equation CAD_GDP:   
Residual standard error: 1.373 on 10 degrees of freedom  
Multiple R-Squared: 0.9929, Adjusted R-squared: 0.9718   
F-statistic:  46.9 on 30 and 10 DF,  p-value: 1.874e-07   


Estimation results for equation openness1d:   
Residual standard error: 2.105 on 10 degrees of freedom  
Multiple R-Squared: 0.9917, Adjusted R-squared: 0.967   
F-statistic: 40.02 on 30 and 10 DF,  p-value: 4.059e-07   


Covariance matrix of residuals:  
                FDI GrowthRate ExcRate1d EnergyImp1d  CAD_GDP openness1d  
FDI          3.1554    1.68047    0.6916      0.5061 -0.83198    -0.8250  
GrowthRate   1.6805    2.47067    2.9998      0.8613  0.03516     1.2176  
ExcRate1d    0.6916    2.99977   15.6964      0.5446  1.14479     0.3973  
EnergyImp1d  0.5061    0.86126    0.5446      0.6537  0.31013     0.8152  
CAD_GDP     -0.8320    0.03516    1.1448      0.3101  1.88538     1.5600  
openness1d  -0.8250    1.21764    0.3973      0.8152  1.56002     4.4295  

Correlation matrix of residuals:  
                 FDI GrowthRate ExcRate1d EnergyImp1d  CAD_GDP openness1d  
FDI          1.00000    0.60186   0.09827      0.3524 -0.34110   -0.22068  
GrowthRate   0.60186    1.00000   0.48171      0.6777  0.01629    0.36807  
ExcRate1d    0.09827    0.48171   1.00000      0.1700  0.21044    0.04765  
EnergyImp1d  0.35241    0.67771   0.17001      1.0000  0.27936    0.47909  
CAD_GDP     -0.34110    0.01629   0.21044      0.2794  1.00000    0.53983  
openness1d  -0.22068    0.36807   0.04765      0.4791  0.53983    1.00000  

What I thought:
Working from the definition of partial autocorrelation, I reasoned that when one variable in the VAR is regressed on the others in its equation, the residuals from that regression are orthogonal to all the remaining regressors in the VAR, and hence ultimately independent of them.
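To check this intuition, here is a minimal sketch with simulated data (all names hypothetical): each equation's OLS residuals are orthogonal to that equation's own regressors, yet the residuals of different equations can still be correlated with each other.

```r
set.seed(42)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)              # stand-ins for the lagged regressors
u1 <- rnorm(n)
u2 <- 0.7 * u1 + rnorm(n)                   # errors correlated across equations
y1 <- 0.5 * x1 + 0.3 * x2 + u1
y2 <- 0.2 * x1 + 0.6 * x2 + u2

e1 <- resid(lm(y1 ~ x1 + x2 - 1))           # equation-by-equation OLS, as in a VAR
e2 <- resid(lm(y2 ~ x1 + x2 - 1))

crossprod(cbind(x1, x2), e1)                # ~ 0: orthogonal to own regressors
cor(e1, e2)                                 # clearly nonzero: equations still linked
```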

But, given this and other results and theorems, what can be inferred from the "covariance matrix of residuals" and the "correlation matrix of residuals" after a VAR?

What is the importance of the "covariance matrix of residuals" and the "correlation matrix of residuals" after a VAR?
I notice that various software packages tabulate them. I gave a concrete example above so that those who know the theory can explain it in detail easily.
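For reference, both matrices can be pulled out of the fitted object directly. A sketch assuming the model above was fitted with the `vars` package, whose summary object (as far as I know) exposes `covres` and `corres` components; note also that `ic` is only consulted when `lag.max` is supplied, so with a fixed `p` it has no effect:

```r
library(vars)
fit <- VAR(V6CADModelSt45obs1D.df[, c(5, 3, 2, 6, 1, 4)], p = 5, type = "none")
s   <- summary(fit)

s$covres          # covariance matrix of residuals, as printed above
s$corres          # correlation matrix of residuals
cov2cor(s$covres) # corres is just covres rescaled to unit variances
head(resid(fit))  # the underlying residual series themselves
```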

Any help will be greatly appreciated.

Best Answer

For simplicity consider a bivariate VAR(1) model with no intercept:

$$y_{1,t} = \beta_{11} y_{1,t-1} + \beta_{12} y_{2,t-1} + \epsilon_{1,t}$$ $$y_{2,t} = \beta_{21} y_{1,t-1} + \beta_{22} y_{2,t-1} + \epsilon_{2,t}$$

You may be interested in how the innovations $\epsilon_{1,t}$ and $\epsilon_{2,t}$ are related. If $\operatorname{corr}(\epsilon_{1,t}, \epsilon_{2,t})>0$, you would expect that at any given time the two innovations both being positive or both being negative is more likely than one of them being positive while the other is negative.*
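A quick simulation check of this claim (illustrative numbers, bivariate normal innovations): with $\rho = 0.6$ the two innovations share a sign about 70% of the time, matching the closed-form orthant probability $1/2 + \arcsin(\rho)/\pi$.

```r
set.seed(1)
n   <- 1e5
rho <- 0.6                                   # illustrative positive correlation
z1  <- rnorm(n)
z2  <- rho * z1 + sqrt(1 - rho^2) * rnorm(n) # corr(z1, z2) = rho by construction

mean(sign(z1) == sign(z2))                   # ~ 0.70 > 0.5: same sign more likely
0.5 + asin(rho) / pi                         # exact value for the bivariate normal
```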

Given the VAR model coefficients and the error covariance matrix, the VAR system characterizes the joint conditional first and second moments of the dependent variables $y_{1,t}$ and $y_{2,t}$. (If you additionally assume normality, the VAR system characterizes not only the joint conditional first and second moments, but also the joint conditional distribution of the dependent variables.) Without the error covariance matrix, the marginal conditional distributions would be characterized but the joint distribution would not.
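To make this concrete for the bivariate VAR(1) above, here is a sketch with made-up coefficient values: given $y_{t-1}$, the coefficients alone deliver the conditional mean, $\Sigma$ supplies the conditional covariance, and under normality the two together pin down the entire joint conditional distribution.

```r
library(MASS)
B     <- matrix(c(0.5, 0.1,
                  0.2, 0.4), 2, 2, byrow = TRUE)  # beta_ij, illustrative values
Sigma <- matrix(c(1.0, 0.6,
                  0.6, 1.0), 2, 2)                # error covariance matrix
y_prev <- c(1, -0.5)                              # observed y_{t-1}

mu <- as.vector(B %*% y_prev)  # conditional mean of (y_{1,t}, y_{2,t})
# Under Gaussian innovations: y_t | y_{t-1} ~ N(mu, Sigma)
draws <- mvrnorm(10000, mu = mu, Sigma = Sigma)
colMeans(draws)                # ~ mu
cov(draws)                     # ~ Sigma: it governs the joint, not just marginals
```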

* This implication of correlation holds for symmetric distributions such as Normal or Student's $t$. For a more thorough treatment of correlation, see e.g. Rodgers & Nicewander "Thirteen Ways to Look at the Correlation Coefficient" (1988).
