Solved – Simplification in proof of OLS inconsistency

asymptotics, consistency, least squares, panel data

I'm a little confused right now regarding the LLN "jump" from probability limits to expectations and variances/covariances:

Say we have a linear regression model with $S$ observations:

$$ y = X\beta + \epsilon. $$

Thus,

$$ \hat{\beta}_{OLS} = (X'X)^{-1}X'y$$

and

$$
\text{plim}\, \hat{\beta}_{OLS} = \beta + \text{plim} \left(\frac{X'X}{S}\right)^{-1}\text{plim}\left(\frac{X'\epsilon}{S}\right) = \beta + E\left(\frac{X'X}{S}\right)^{-1}E\left(\frac{X'\epsilon}{S}\right). \ \ \ \ \ \ \ (1)$$

Assuming $X \in \mathbb{R}^{S\times1}$, i.e. a single (column) regressor, and that the model was demeaned, this simplifies to

$$ \hat{\beta}_{OLS} = \beta + \frac{\text{Cov}(x_s,\epsilon_s)}{\text{Var}(x_s)}. \ \ \ \ \ \ \ (2) $$
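
For concreteness, here is a minimal simulation sketch of (2). The data-generating process below is purely illustrative (chosen so that $x_s$ and $\epsilon_s$ share a common component and hence $\text{Cov}(x_s,\epsilon_s) \neq 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
S = 200_000            # large sample, to approximate the plim
beta = 2.0

# Illustrative DGP: x_s and eps_s share a common component,
# so Cov(x_s, eps_s) = 0.5 != 0 and OLS is inconsistent.
common = rng.normal(size=S)
x = common + rng.normal(size=S)          # Var(x_s) = 2
eps = 0.5 * common + rng.normal(size=S)  # Cov(x_s, eps_s) = 0.5
y = beta * x + eps

# Demeaned single-regressor OLS: beta_hat = (x'x)^{-1} x'y
x_d, y_d = x - x.mean(), y - y.mean()
beta_hat = (x_d @ y_d) / (x_d @ x_d)

print(beta_hat)          # approx 2.25, not 2.0
print(beta + 0.5 / 2.0)  # the bias predicted by (2)
```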

My question now is the following:

  • if I cannot demean my model (because I'm in a panel setting and have to show the inconsistency of OLS first), then I cannot write it in a form like (2) but have to stick with the expectation form (1), right?

Best Answer

You can. By using partitioned-regression results (the Frisch–Waugh–Lovell theorem), if the model includes a constant term,

$$y = \alpha + \mathbf x' \beta + u$$

then the "slope coefficients" are in practice calculated by OLS as

$$\hat{\beta}_{OLS} = (\hat {\tilde X}'\hat {\tilde X})^{-1}\hat {\tilde X}'\hat {\tilde y}= \beta + (\hat {\tilde X}'\hat {\tilde X})^{-1}\hat {\tilde X}'u$$

where $\hat {\tilde X} = X - \bar X$ and $\hat {\tilde y} = y - \bar y$ denote the variables in deviations from their sample means, and, for later use, ${\tilde X} = X-E(X)$ denotes deviations from the population mean. Note that $X$ here does not include a column of ones (but the model does include a constant). In other words, whether you demean a priori or not, OLS demeans the variables automatically when estimating the slope coefficients, provided a constant term is included in the model. If you multiply and divide the above by the sample size, you get the multivariate analog of $(2)$: the variables are now in sample mean-deviation form, so the resulting quantities are estimated variance and covariance matrices (and in $(2)$ you should therefore use hats, by the way).
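
A quick numerical check of this equivalence (the data and coefficients below are arbitrary, only meant to illustrate the FWL result):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
X = rng.normal(size=(n, 2))              # regressors, no column of ones
u = rng.normal(size=n)
y = 1.0 + X @ np.array([0.5, -1.5]) + u  # the model does include a constant

# (a) OLS with an explicit constant term: intercept + slopes
Xc = np.column_stack([np.ones(n), X])
coef_with_const = np.linalg.lstsq(Xc, y, rcond=None)[0]

# (b) OLS on demeaned variables, no constant (the FWL route)
Xd = X - X.mean(axis=0)
yd = y - y.mean()
slopes_demeaned = np.linalg.lstsq(Xd, yd, rcond=None)[0]

print(coef_with_const[1:])   # slopes from (a)
print(slopes_demeaned)       # identical up to floating-point error
```

Both print the same slope estimates: including a constant in the model is equivalent to demeaning the variables first.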

Then

$$\text{plim}\big( \hat{\beta}_{OLS} -\beta\big) = \text{plim}\left (\frac 1n\hat {\tilde X}'\hat {\tilde X}\right)^{-1}\text{plim}\left(\frac 1n \hat {\tilde X}'u\right)$$

The Law of Large Numbers does not "jump from probability limits to expectations": it is the very essence of the Law that the probability limit of a sample average is the corresponding expected value. Then (under the necessary conditions)

$$\text{plim}\big( \hat{\beta}_{OLS} -\beta\big) = E\left (\frac 1n\tilde X'\tilde X\right)^{-1}E\left(\frac 1n \tilde X'u\right)$$

Now the variables are in deviations from their true expected values, so these expectations are the true variance and covariance matrices, and you can write, for example,

$$\text{plim}\big( \hat{\beta}_{OLS} -\beta\big) = [ \text {Var}(\mathbf x)]^{-1}\cdot \text{Cov}(\mathbf x, u)$$

which is the probability limit of the multivariate analogue of $(2)$.
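
A simulation sketch of this limit in the multivariate case (again with an arbitrary illustrative DGP, in which the second regressor is constructed to be correlated with the error):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
beta = np.array([1.0, -2.0])

# Illustrative DGP: the second regressor is correlated with the error
z = rng.normal(size=(n, 3))
X = np.column_stack([z[:, 0], z[:, 1] + z[:, 2]])
u = 0.8 * z[:, 2] + rng.normal(size=n)   # Cov(x, u) = (0, 0.8)'
y = 0.5 + X @ beta + u

# OLS with a constant; keep only the slope coefficients
Xc = np.column_stack([np.ones(n), X])
beta_hat = np.linalg.lstsq(Xc, y, rcond=None)[0][1:]

V = np.cov(X, rowvar=False)              # estimate of Var(x)
c = np.array([0.0, 0.8])                 # true Cov(x, u)
print(beta_hat - beta)                   # empirical inconsistency
print(np.linalg.solve(V, c))             # [Var(x)]^{-1} Cov(x, u) = (0, 0.4)'
```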
