Variance-Covariance matrix for Weighted Least Squares

least squares, linear regression, matrices, variance, weighted least squares

For ordinary least squares (OLS), the solution to the system $X\beta = y$ is

$\hat{\beta} = (X^T X)^{-1} X^T y$

and the variance on the solution parameters is

$Var(\hat{\beta}) = \sigma^2 (X^T X)^{-1}$

where the vector $y$ denotes our observations and $\sigma$ is the (common) error on these observations.
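As a quick sketch of these two formulas, here is a small NumPy example (the line fit, noise level, and seed are all hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical example: fit y = b0 + b1*x with a common error sigma on each point.
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])      # design matrix with intercept column
sigma = 0.1                               # common observational error
y = 2.0 + 3.0 * x + rng.normal(0.0, sigma, n)

# OLS solution: beta_hat = (X^T X)^{-1} X^T y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Variance-covariance of the parameters: sigma^2 (X^T X)^{-1}
cov_beta = sigma**2 * XtX_inv
```

The diagonal of `cov_beta` gives the variances of the fitted intercept and slope; the off-diagonal entry is their covariance.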

If I instead obtained the solution from weighted least squares as

$\hat{\beta} = (X^T W X)^{-1} X^T W y$

where $W_{ii} = 1 / \sigma^2_i$, what would be the corresponding variance-covariance matrix of $\hat{\beta}$? Is it the same as in the OLS case?

Best Answer

In weighted least squares, you have $y=X\beta+\varepsilon$ where $\operatorname E(\varepsilon)=0$ and $\operatorname{Var}(\varepsilon)=\Omega=\operatorname{diag}(\sigma_1^2,\ldots,\sigma_n^2)$ is positive definite.

The weighted least squares estimator of $\beta$ is then $$\hat\beta_{\text{WLS}}=(X^T\Omega^{-1}X)^{-1}X^T\Omega^{-1}y=Py \quad(\text{say})$$

Therefore,

\begin{align}
\operatorname{Var}\left(\hat\beta_{\text{WLS}}\right)
&= P\cdot\operatorname{Var}(y)\cdot P^T \\
&= (X^T\Omega^{-1}X)^{-1}X^T\Omega^{-1}\cdot\operatorname{Var}(y)\cdot\left((X^T\Omega^{-1}X)^{-1}X^T\Omega^{-1}\right)^T \\
&= (X^T\Omega^{-1}X)^{-1}X^T\Omega^{-1}\cdot\Omega\cdot\Omega^{-1}X(X^T\Omega^{-1}X)^{-1} \\
&= (X^T\Omega^{-1}X)^{-1}
\end{align}

You can see that this matches with $\operatorname{Var}\left(\hat\beta_{\text{OLS}}\right)$ when $\sigma_i^2=\sigma^2$ for each $i$.
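A numerical sanity check of this derivation is straightforward; in the sketch below (design matrix, error levels, and seed are all made up for illustration), propagating $\operatorname{Var}(y)=\Omega$ through $P$ reproduces $(X^T\Omega^{-1}X)^{-1}$, and equal $\sigma_i^2$ recovers the OLS formula:

```python
import numpy as np

# Hypothetical heteroscedastic setup.
rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
sigmas = rng.uniform(0.5, 2.0, n)            # per-observation error levels
Omega = np.diag(sigmas**2)                   # Var(y)
Omega_inv = np.diag(1.0 / sigmas**2)         # W = Omega^{-1}

# WLS variance-covariance: (X^T Omega^{-1} X)^{-1}
cov_wls = np.linalg.inv(X.T @ Omega_inv @ X)

# Propagating Var(y) = Omega through P = (X^T Omega^{-1} X)^{-1} X^T Omega^{-1}
# should give the same matrix.
P = cov_wls @ X.T @ Omega_inv
cov_via_P = P @ Omega @ P.T

# Homoscedastic special case: sigma_i^2 = sigma^2 recovers sigma^2 (X^T X)^{-1}.
sigma = 0.7
cov_ols = sigma**2 * np.linalg.inv(X.T @ X)
cov_wls_equal = np.linalg.inv(X.T @ np.diag(np.full(n, 1.0 / sigma**2)) @ X)
```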
