Solved – OLS as the best linear unbiased estimator

covariance-matrix, unbiased-estimator

Hi, I am stuck on this one. The question is related to the Gauss-Markov theorem:

Consider a general alternative to the OLS estimator that is also a linear unbiased estimator,
say ${\tilde \beta}$. Outline a proof that the OLS estimator $b$ is better, in a well-defined sense, than ${\tilde \beta}$.

For both estimators $b$ and ${\tilde \beta}$ I have to use the fact that $E(uu') = \sigma^2I$.

I have shown already that $b = (X'X)^{-1}X'y = (X'X)^{-1}X'(X\beta+u)$

$=(X'X)^{-1}X'X\beta+(X'X)^{-1}X'u$

$=\beta+(X'X)^{-1}X'u$
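As a quick numerical sanity check of this decomposition (not part of the proof, just a sketch using numpy with made-up simulated data), the two expressions for $b$ agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
X = rng.normal(size=(n, k))           # fixed design matrix
beta = np.array([1.0, -2.0, 0.5])     # true coefficients (made up)
u = rng.normal(scale=2.0, size=n)     # errors with E(u) = 0
y = X @ beta + u

XtX_inv = np.linalg.inv(X.T @ X)
b_direct = XtX_inv @ X.T @ y          # b = (X'X)^{-1} X'y
b_decomp = beta + XtX_inv @ X.T @ u   # b = beta + (X'X)^{-1} X'u

print(np.allclose(b_direct, b_decomp))  # True: the two forms agree
```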

This establishes linearity. Now,

$E(b)= E[ \beta+(X'X)^{-1}X'u]=\beta $ as $E[u]=0$

This establishes unbiasedness.

Variance-covariance matrix of $b$: using again the fact that $E(uu') = \sigma^2I$,

$Var(b)=E(b-\beta)(b-\beta)'=\sigma^2(X'X)^{-1}$
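One can also check this variance formula by simulation (again only an illustrative sketch with assumed values, reusing the setup above): the empirical covariance of $b$ over many error draws should approach $\sigma^2(X'X)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma = 100, 3, 2.0
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0, 0.5])
XtX_inv = np.linalg.inv(X.T @ X)

# Redraw u many times, recompute b, and estimate Var(b) empirically
reps = 20000
bs = np.empty((reps, k))
for r in range(reps):
    u = rng.normal(scale=sigma, size=n)
    bs[r] = beta + XtX_inv @ X.T @ u

emp_cov = np.cov(bs, rowvar=False)       # empirical Var(b)
theory = sigma**2 * XtX_inv              # sigma^2 (X'X)^{-1}
print(np.max(np.abs(emp_cov - theory)))  # small Monte Carlo error
```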

In order to show that $b$ is a better estimator than ${\tilde \beta}$ I need to follow the same reasoning, still using the fact that $E(uu') = \sigma^2I$, and then conclude that $b$ has the minimum variance of all linear unbiased estimators, thus it is the best, that is

$Var(b)\leq Var(\tilde \beta)$, I guess.

I am not sure how to properly compare the two estimators, so if someone could give me a hint it would be much appreciated.

Best Answer

We can compute the variance-covariance matrices of $b$ and $\tilde \beta$ and then compare them in order to tell which one has the smallest variance (the best estimator).

Knowing that $E(u)=0$,

$E(b-\beta)(b-\beta)'= E(\beta+(X'X)^{-1}X'u-\beta)(\beta+(X'X)^{-1}X'u-\beta)'$

$=E\{[(X'X)^{-1}X'u][(X'X)^{-1}X'u]'\}$

$=E[(X'X)^{-1}X'uu'X(X'X)^{-1}]$; recall that $E(uu')= \sigma^2I$

$=\sigma^2(X'X)^{-1}X'IX(X'X)^{-1}$

$=\sigma^2(X'X)^{-1}$
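A tiny numerical check of the sandwich step above (an illustrative sketch with arbitrary simulated $X$, not part of the answer itself): with $A=(X'X)^{-1}X'$, the product $AIA'$ collapses to $(X'X)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = rng.normal(size=(n, 4))
XtX_inv = np.linalg.inv(X.T @ X)
A = XtX_inv @ X.T                      # A = (X'X)^{-1} X'

# (X'X)^{-1} X' I X (X'X)^{-1} = (X'X)^{-1}
sandwich = A @ np.eye(n) @ A.T
print(np.allclose(sandwich, XtX_inv))  # True
```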

So it has been shown that $b$ is unbiased, as $E(b_i)=\beta_i$ for each $i$ and therefore $E(b)=\beta$, but also linear, as $b=(X'X)^{-1}X'y=Ay$ with $A=(X'X)^{-1}X'$.

Let $\tilde \beta$ be a general linear unbiased estimator, $\tilde \beta\equiv(A+C)y=[(X'X)^{-1}X'+C]y$

$=[(X'X)^{-1}X'+C](X\beta+u)$

$=\beta+(X'X)^{-1}X'u+CX\beta+Cu$; we require $CX=0$ for unbiasedness, so that $E(\tilde\beta)=\beta$ holds for every $\beta$

$E(\tilde \beta)=E[\beta+(X'X)^{-1}X'u + CX\beta+Cu]$, with $CX=0$ imposed and $E(u)=0$,

$=E(\beta)=\beta$

Hence $\tilde \beta$ is also unbiased, and $\tilde \beta=b+Cy$ is linear.
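To make the construction concrete (an illustrative sketch, with one particular choice of $C$ that is not from the answer itself): $C = D\,[I - X(X'X)^{-1}X']$ satisfies $CX=0$ for any $k\times n$ matrix $D$, because the annihilator matrix $I - X(X'X)^{-1}X'$ maps $X$ to zero; simulation then confirms that $\tilde\beta=(A+C)y$ is unbiased.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, sigma = 100, 3, 2.0
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0, 0.5])
XtX_inv = np.linalg.inv(X.T @ X)
A = XtX_inv @ X.T

M = np.eye(n) - X @ A                    # annihilator: M X = 0
C = 0.05 * rng.normal(size=(k, n)) @ M   # any D works; C X = 0
print(np.allclose(C @ X, 0))             # True

reps = 20000
tilde = np.empty((reps, k))
for r in range(reps):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    tilde[r] = (A + C) @ y               # beta_tilde = (A + C) y

print(tilde.mean(axis=0))                # close to beta -> unbiased
```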

Variance-covariance matrix of $\tilde \beta$: $E(\tilde \beta-\beta)(\tilde \beta-\beta)'=E\{[(X'X)^{-1}X'u+Cu][(X'X)^{-1}X'u+Cu]'\}$

$=E[(X'X)^{-1}X'uu'X(X'X)^{-1}+(X'X)^{-1}X'uu'C'+Cuu'X(X'X)^{-1}+Cuu'C']$

Using the fact that $E(uu')=\sigma^2I$, this simplifies to

$E(\tilde \beta-\beta)(\tilde \beta-\beta)'= \sigma^2(X'X)^{-1}+\sigma^2(X'X)^{-1}X'C'+\sigma^2CX(X'X)^{-1}+\sigma^2CC'$

we know that $CX=0$, hence $(CX)'=X'C'=0$, so we have

$E(\tilde \beta-\beta)(\tilde \beta-\beta)'=\sigma^2(X'X)^{-1}+\sigma^2CC'$

$=\sigma^2[(X'X)^{-1}+CC']$. Recall that $E(b-\beta)(b -\beta)'=\sigma^2(X'X)^{-1}$.

Thus $E(\tilde \beta-\beta)(\tilde \beta-\beta)'=E(b-\beta)(b -\beta)'+ \sigma^2CC'$

Since $CC'$ is positive semi-definite (for any vector $a$, $a'CC'a=(C'a)'(C'a)=\|C'a\|^2\geq 0$), we can conclude that $Var(\tilde \beta)-Var(b)=\sigma^2CC'$ is positive semi-definite, i.e. $Var(b)\leq Var(\tilde \beta)$ in the matrix sense, which means that $b$ is the best linear unbiased estimator.
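Finally, the whole comparison can be verified exactly in finite samples (again just a sketch with assumed values): since $Var(b)=\sigma^2AA'$ and $Var(\tilde\beta)=\sigma^2(A+C)(A+C)'$, the gap computes to $\sigma^2CC'$ and its eigenvalues are non-negative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, sigma = 100, 3, 2.0
X = rng.normal(size=(n, k))
XtX_inv = np.linalg.inv(X.T @ X)
A = XtX_inv @ X.T                                  # OLS weights
C = rng.normal(size=(k, n)) @ (np.eye(n) - X @ A)  # C X = 0

# Var(b) = sigma^2 A A',  Var(beta_tilde) = sigma^2 (A+C)(A+C)'
V_b = sigma**2 * A @ A.T
V_t = sigma**2 * (A + C) @ (A + C).T
gap = V_t - V_b

print(np.allclose(gap, sigma**2 * C @ C.T))    # True: gap is sigma^2 CC'
print(np.linalg.eigvalsh(gap).min() >= -1e-8)  # True: p.s.d. gap
```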