Solved – Estimating VAR by GLS versus OLS: efficiency

covariance-matrix, efficiency, estimation, generalized-least-squares, vector-autoregression

Suppose I have a VAR model with different regressors in different equations (this could be due to restricting some coefficients of a full VAR($p$) model to zero or having some different exogenous regressors in different equations).
Suppose also that the model is correctly specified.
I will be interested in asymptotic results (small sample properties may be more complicated, so I leave them aside for simplicity).

Such a model can be estimated by GLS or by equation-by-equation OLS.
Both the GLS and the equation-by-equation OLS estimators will be consistent (and thus asymptotically unbiased).
The GLS estimator will be more efficient than the equation-by-equation OLS estimator. I understand this means that the variance-covariance matrix of the GLS coefficient vector will be "smaller" than that of OLS (i.e. $V_{\text{GLS}}-V_{\text{OLS}}$ will be a negative semidefinite matrix, or equivalently $V_{\text{OLS}}-V_{\text{GLS}}$ will be positive semidefinite).
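To make the setup concrete, here is a minimal simulation sketch (assuming NumPy/SciPy; the two-equation system, variable names, and numeric values below are purely illustrative, not from the question) of equation-by-equation OLS versus feasible GLS for a system with different regressors in each equation and correlated errors:

```python
import numpy as np
from scipy.linalg import block_diag

# Illustrative two-equation SUR-type system: different regressors per equation,
# errors correlated across equations with covariance Sigma.
rng = np.random.default_rng(0)
n = 500
Sigma = np.array([[1.0, 0.7],
                  [0.7, 1.0]])          # cross-equation error covariance

x1, x2, x3 = rng.standard_normal((3, n))
u = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)

y1 = 1.0 * x1 + 0.5 * x2 + u[:, 0]      # equation 1: two regressors
y2 = -0.8 * x3 + u[:, 1]                # equation 2: one (different) regressor

X1 = np.column_stack([x1, x2])
X2 = x3[:, None]

# Equation-by-equation OLS
b1_ols = np.linalg.solve(X1.T @ X1, X1.T @ y1)
b2_ols = np.linalg.solve(X2.T @ X2, X2.T @ y2)

# Feasible GLS: stack the system, estimate Sigma from the OLS residuals,
# then apply the GLS formula with Omega = Sigma kron I_n.
resid = np.column_stack([y1 - X1 @ b1_ols, y2 - X2 @ b2_ols])
Sigma_hat = resid.T @ resid / n
X = block_diag(X1, X2)                  # (2n) x (k1 + k2) block-diagonal design
y = np.concatenate([y1, y2])
Omega_inv = np.kron(np.linalg.inv(Sigma_hat), np.eye(n))
b_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)

print("OLS:", b1_ols, b2_ols)
print("GLS:", b_gls)
```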

Question: will all the diagonal elements of the variance-covariance matrix be smaller in the GLS case? In other words, will each and every coefficient have a (weakly) smaller variance in the GLS case?

Here is a related question comparing OLS and GLS.

Best Answer

Yes. Let the GLS covariance matrix be $V_G$ and the OLS covariance matrix $V_O$. Then, as you point out, GLS being efficient means that $V_O-V_G$ is positive semi-definite. This means that for any non-zero vector $z$, $z'(V_O-V_G)z\geq0$. Let's suppose that the dimension of the covariance matrices is $K\times K$. Then, if we choose $z$ to be $e_i$, where $e_i$ is the $i$th column of the $K\times K$ identity matrix, it follows that $$ e_i'(V_O-V_G)e_i\geq 0, $$ or, in words, every diagonal element of $V_O$ is greater than (or equal to) that of $V_G$.
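For a quick numerical illustration of the $e_i$ argument (a sketch assuming NumPy/SciPy; the design matrices and $\Sigma$ below are arbitrary choices, not from the answer), one can compute both covariance matrices for a fixed block-diagonal design with error covariance $\Omega=\Sigma\otimes I_n$, using $V_O=(X'X)^{-1}X'\Omega X(X'X)^{-1}$ and $V_G=(X'\Omega^{-1}X)^{-1}$, and check that every diagonal element of $V_O-V_G$ is nonnegative:

```python
import numpy as np
from scipy.linalg import block_diag

# Arbitrary design: two equations with different regressors, correlated errors.
rng = np.random.default_rng(1)
n = 200
Sigma = np.array([[1.0, 0.7],
                  [0.7, 1.0]])
X1 = rng.standard_normal((n, 2))        # regressors, equation 1
X2 = rng.standard_normal((n, 1))        # different regressor, equation 2
X = block_diag(X1, X2)
Omega = np.kron(Sigma, np.eye(n))
Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))

# Covariance of stacked equation-by-equation OLS (sandwich form) and of GLS.
XtX_inv = np.linalg.inv(X.T @ X)
V_ols = XtX_inv @ X.T @ Omega @ X @ XtX_inv
V_gls = np.linalg.inv(X.T @ Omega_inv @ X)

diff = V_ols - V_gls
print("diagonal of V_OLS - V_GLS:", np.diag(diff))             # all >= 0
print("smallest eigenvalue:", np.linalg.eigvalsh(diff).min())  # >= 0 up to rounding
```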
