Solved – Strict exogeneity and lagged variables

exogeneity, time series

I am confused about why strict exogeneity must be violated when we have lagged time series variables. My understanding of strict exogeneity is that a regressor must be uncorrelated with the error terms in all periods. But isn't exogeneity always a necessary assumption for estimation? If $x_t$ and $u_t$ are uncorrelated, and $x_{t-1}$ and $u_{t-1}$ are uncorrelated, how would a specification that includes both $x_t$ and $x_{t-1}$ violate strict exogeneity?

Best Answer

In most cases it is assumed that $E[\epsilon_t]=0$. Strict exogeneity then requires the regressors to be orthogonal to the error terms of all periods, i.e. $E[x_s \epsilon_t]=0$ for all $s$ and $t$. For some time series models this is violated. Consider the AR(1) model $ \ y_t=\beta y_{t-1}+ \epsilon_t \ $ with $ \ \epsilon_t \sim N(0, \sigma^2) \ $ for all $t$, where the errors are independent over time. Since you regress $y_t$ on $y_{t-1}$, and $\epsilon_t$ is independent of everything dated $t-1$ and earlier, the error term $\epsilon_t$ is orthogonal to the regressor $y_{t-1}$, i.e. $E[y_{t-1} \epsilon_t]=0$.
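As a quick sanity check of this contemporaneous orthogonality, here is a minimal simulation sketch (not part of the original answer; the values $\beta=0.5$, $\sigma=1$, and the sample size are arbitrary illustration choices) that estimates $E[y_{t-1}\epsilon_t]$ from a simulated AR(1) series:

```python
import numpy as np

# Simulate y_t = beta * y_{t-1} + eps_t
# (beta, sigma, and T are arbitrary illustration values.)
rng = np.random.default_rng(0)
T, beta, sigma = 100_000, 0.5, 1.0

eps = rng.normal(0.0, sigma, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = beta * y[t - 1] + eps[t]

# Sample analogue of E[y_{t-1} * eps_t]; should be close to 0,
# since eps_t is independent of everything dated t-1 and earlier.
print(np.mean(y[:-1] * eps[1:]))
```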

However, strict exogeneity requires each regressor to be orthogonal to the error terms of *all* periods; in particular, $y_t$, which is the regressor in the equation for $y_{t+1}$, must be orthogonal to $\epsilon_s$ for every $s$. That does not hold in this model, as the following calculation shows:

$$
\begin{aligned}
E[y_t \epsilon_t] &= E[(\beta y_{t-1}+ \epsilon_t)\epsilon_t] && \text{(by } y_t=\beta y_{t-1}+ \epsilon_t\text{)}\\
&= \beta E[y_{t-1} \epsilon_t]+E[\epsilon_t^2] \\
&= E[\epsilon_t^2] && \text{(by } E[y_{t-1} \epsilon_t]=0\text{)}\\
&= \sigma^2 && \text{(by } \epsilon_t \sim N(0, \sigma^2)\text{).}
\end{aligned}
$$
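Continuing the simulation sketch above, the sample analogue of $E[y_t \epsilon_t]$ indeed comes out near $\sigma^2$ rather than near zero:

```python
# Sample analogue of E[y_t * eps_t]; the derivation says it equals sigma^2 (= 1.0 here).
print(np.mean(y * eps))
```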

Therefore, $y_t$ is not orthogonal to all error terms (it is correlated with $\epsilon_t$ whenever $\sigma^2 > 0$), even though $y_t$ is the regressor in the equation for $y_{t+1}$. Thus, strict exogeneity is violated.

This implies that strict exogeneity could only hold in the degenerate case $\sigma^2 = 0$, i.e. $\epsilon_t = 0$ for all $t$.
