Variance of restricted least squares estimator

Tags: least-squares, restrictions

I'm working through the derivation of the restricted least squares estimator, and the final step of the difference in variance between the unrestricted and restricted estimators gives me:

$$Var(\beta^U) - Var(\beta^R) = \sigma^2 (X^TX)^{-1} M$$
where $M$ is the idempotent matrix,
$$M = I - (X^TX)^{-1}R^T(R(X^TX)^{-1}R^T)^{-1}R$$
and $R$ is a matrix of linear restrictions.

I understand that $M$ is positive semi-definite, but is there anything guaranteeing that the product is also positive semi-definite? (And hence that the variance of the unrestricted estimator is greater than or equal to that of the restricted estimator.)

Best Answer

Intuitive solution: Imposing restrictions always costs flexibility: the restricted estimator cannot adapt to the data as freely as the unrestricted one. Hence the variance of the restricted estimator can be at most as high as the variance of the unrestricted estimator.

Mathematical: The cross-product matrix $X^TX$ is a sum-of-squares matrix and resembles (though is not exactly) the covariance matrix of the regressors $X$. Provided $X$ has full column rank, $X^TX$ is positive definite, and so is its inverse. Moreover, $\sigma^2 (X^TX)^{-1}$ is the variance-covariance matrix of the least squares estimator, which is also positive definite, since $\sigma^2$ is the variance of the true errors and is strictly positive.
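As a minimal numerical sketch of this point (the design matrix `X` and the value $\sigma^2 = 2$ are hypothetical choices, not from the question), one can check that both $X^TX$ and $\sigma^2(X^TX)^{-1}$ have strictly positive eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
X = rng.normal(size=(n, k))   # hypothetical full-column-rank design matrix
sigma2 = 2.0                  # hypothetical error variance, sigma^2 > 0

XtX = X.T @ X
var_beta = sigma2 * np.linalg.inv(XtX)  # Var(beta^U) = sigma^2 (X'X)^{-1}

# Both matrices are symmetric positive definite: all eigenvalues > 0.
print(np.linalg.eigvalsh(XtX).min() > 0)       # True
print(np.linalg.eigvalsh(var_beta).min() > 0)  # True
```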

Note that positive definiteness of $\sigma^2(X^TX)^{-1}$ alone does not settle the question, since the product of a positive definite and a positive semi-definite matrix need not be symmetric. The missing step is that the difference itself can be written as a quadratic form: $Var(\beta^U) - Var(\beta^R) = \sigma^2 (X^TX)^{-1}R^T(R(X^TX)^{-1}R^T)^{-1}R(X^TX)^{-1} = \sigma^2 A^T(R(X^TX)^{-1}R^T)^{-1}A$ with $A = R(X^TX)^{-1}$. Since $R(X^TX)^{-1}R^T$ is positive definite, this quadratic form is positive semi-definite. For this reason the variance of the unrestricted estimator is greater than or equal to that of the restricted estimator.
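A small numerical check of the conclusion (the matrices `X` and `R` below are hypothetical examples; the difference is computed in its standard symmetric form rather than via the question's expression):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, q = 200, 4, 2
X = rng.normal(size=(n, k))    # hypothetical design matrix
sigma2 = 1.5                   # hypothetical error variance
R = rng.normal(size=(q, k))    # hypothetical matrix of q linear restrictions

XtX_inv = np.linalg.inv(X.T @ X)
C = R @ XtX_inv @ R.T          # R (X'X)^{-1} R', positive definite

# Standard form of Var(beta^U) - Var(beta^R):
diff = sigma2 * XtX_inv @ R.T @ np.linalg.inv(C) @ R @ XtX_inv

# The difference is symmetric positive semi-definite:
# all eigenvalues >= 0 up to floating-point rounding.
eig = np.linalg.eigvalsh(diff)
print(np.allclose(diff, diff.T))  # True
print(bool(np.all(eig > -1e-10)))  # True
```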
