GMM estimation of linear regression with intercept restriction

distributions, generalized-moments, intercept, regression

Say I have a time series regression as follows:
$$y_t = a_i + \beta_i x_t + \varepsilon_t^i \ \ ; \ \ t = 1, 2, \cdots, T \ \ \text{for each } i$$
Now say I impose the following restriction on the intercept, $a_i$:
$$a_i = \beta_i[\lambda - E(x)]$$
where $\lambda$ is a constant and $E(\cdot)$ denotes the expectation.

How can I use GMM to write down a set of moment conditions that I can use to estimate this model and test the restriction on $a_i$?

Attempt: I know how to do this if there were no restriction on $a_i$.

Let $b$ denote the vector of parameters, i.e., $b = [a \ \ \beta]'$. Then we know from GMM theory that $Var(\widehat{b}) = \frac{1}{T}d^{-1}Sd^{-1 \prime}$ where $d = \frac{\partial g_T(b)}{\partial b'}$ and $g_T(b)$ denotes the sample moment conditions, i.e.,
$$g_T(b) = \begin{bmatrix} E_T(y_t - a_i - \beta_i x_t) \\ E_T[(y_t - a_i - \beta_i x_t)x_t] \end{bmatrix} = E_T\left(\begin{bmatrix}\varepsilon_t \\ x_t \varepsilon_t \end{bmatrix}\right)$$ where $E_T(\cdot) = \frac{1}{T}\sum_{t=1}^{T} (\cdot)$.
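
Differentiating these moment conditions with respect to $b' = [a \ \ \beta]$ gives the derivative matrix explicitly:
$$d = \frac{\partial g_T(b)}{\partial b'} = -\begin{bmatrix} 1 & E_T(x_t) \\ E_T(x_t) & E_T(x_t^2) \end{bmatrix}$$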

$S$ is given by:
$$S = \sum_{j=-\infty}^{\infty}\begin{bmatrix} E(\varepsilon_t \varepsilon_{t-j}) & E(\varepsilon_t \varepsilon_{t-j} x_{t-j}) \\ E(x_t \varepsilon_t \varepsilon_{t-j}) & E(x_t \varepsilon_t \varepsilon_{t-j} x_{t-j}) \end{bmatrix}$$
which can be simplified further if the errors are serially uncorrelated and homoskedastic.
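
For concreteness, here is a minimal numerical sketch of this unrestricted, just-identified case (simulated data; the variable names and parameter values are illustrative, and $S$ is built under the iid simplification):

```python
import numpy as np

# Simulated data; the intercept and slope values are illustrative.
rng = np.random.default_rng(0)
T = 500
a_true, beta_true = 0.5, 1.2
x = rng.normal(size=T)
y = a_true + beta_true * x + rng.normal(size=T)

def g_bar(b):
    """Sample moment conditions g_T(b) = E_T[e_t, x_t e_t] for b = (a, beta)."""
    a, beta = b
    e = y - a - beta * x
    return np.array([e.mean(), (x * e).mean()])

# Just-identified: setting g_T(b) = 0 is exactly the OLS normal equations.
X = np.column_stack([np.ones(T), x])
b_hat = np.linalg.solve(X.T @ X, X.T @ y)
print("g_T at the estimate (should be ~0):", g_bar(b_hat))

# Sandwich variance Var(b_hat) = (1/T) d^{-1} S d^{-1}' with d = -X'X/T and,
# under serially uncorrelated errors, S = E_T[u_t u_t'] for u_t = (e_t, x_t e_t).
e_hat = y - X @ b_hat
u = X * e_hat[:, None]
d = -(X.T @ X) / T
S = u.T @ u / T
d_inv = np.linalg.inv(d)
V = d_inv @ S @ d_inv.T / T
print("estimates:", b_hat)
print("standard errors:", np.sqrt(np.diag(V)))
```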

Now once there are restrictions on $a_i$, how do I proceed?

Best Answer

If your restriction looked like $$a_i=\beta_i\lambda$$ or you knew $E(x)$ exactly, all you would need to do is rewrite your moment conditions with the restriction substituted in. That is:

$$ g_T(b)=E_T\begin{bmatrix} y_{i,t} - \beta_{i}\lambda - \beta_{i}x_{i,t}\\ (y_{i,t}-\beta_{i}\lambda-\beta_{i}x_{i,t})x_{i,t} \end{bmatrix} $$

(Actually, I'd write this out in terms of individual-specific moments if $\beta_i$ is truly an individual-level parameter, but that's a different point.)
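
Note that the derivative matrix changes once the restriction is substituted in: taking $\lambda$ as known, $b$ collapses to the scalar $\beta_i$, and a quick calculation gives
$$d=\frac{\partial g_T(b)}{\partial \beta_i}=-E_T\begin{bmatrix}\lambda + x_{i,t}\\ (\lambda + x_{i,t})x_{i,t}\end{bmatrix}.$$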

You then run this through your favorite optimization routine, with the weight matrix you described. The problem with:

$$a_i=\beta_i [\lambda-E(x)]$$

is that you probably do not know $E(x)$, so you have to account for sampling variation. What you should do is define a parameter $\mu=E(x)$, and rewrite your moments:

$$ g_T(b)=E_T\begin{bmatrix} y_{i,t} - \beta_{i}[\lambda-\mu] - \beta_{i}x_{i,t}\\ (y_{i,t}-\beta_{i}[\lambda-\mu]-\beta_{i}x_{i,t})x_{i,t}\\ x_{i,t}-\mu \end{bmatrix}$$

with a corresponding weight matrix. You can derive a weight matrix for this, or (my preferred option) use iterated GMM.
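
As a sketch of how the iterated-GMM route might look on simulated data (all names, "true" values, and starting values below are illustrative, not from your problem), here I treat $\lambda$ as an additional parameter to estimate alongside $\beta$ and $\mu$ — an assumption on my part; if $\lambda$ is known you simply drop it from the parameter vector:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data consistent with the restriction a = beta*(lambda - mu).
rng = np.random.default_rng(1)
T = 500
beta_true, lam_true, mu_true = 1.2, 0.8, 0.3
x = rng.normal(loc=mu_true, scale=1.0, size=T)
y = beta_true * (lam_true - mu_true) + beta_true * x + rng.normal(size=T)

def moments(theta):
    """T x 3 matrix of per-observation moments for theta = (beta, lam, mu)."""
    beta, lam, mu = theta
    e = y - beta * (lam - mu) - beta * x
    return np.column_stack([e, e * x, x - mu])

def gmm_objective(theta, W):
    g = moments(theta).mean(axis=0)   # g_T(theta)
    return T * g @ W @ g

# Iterated GMM: start from the identity weight matrix, estimate,
# rebuild W = S^{-1} from the fitted moments, and repeat to convergence.
theta = np.array([1.0, 0.5, x.mean()])
W = np.eye(3)
for _ in range(20):
    res = minimize(gmm_objective, theta, args=(W,), method="BFGS")
    if np.allclose(res.x, theta, atol=1e-8):
        theta = res.x
        break
    theta = res.x
    u = moments(theta)
    S = u.T @ u / T                   # no HAC correction in this sketch
    W = np.linalg.inv(S)

print("beta, lambda, mu:", theta)
```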

This is a just-identified estimator. You can come up with over-identified estimators for this problem by using first-differences if you want.
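
If you do end up with more moments than parameters (for example, if $\lambda$ is fixed at a known value, or if you stack the moments across several $i$ with a common $\lambda$), the natural way to test the restriction is Hansen's $J$ statistic evaluated at the efficient-GMM estimate,
$$J = T\, g_T(\widehat{b})'\, \widehat{S}^{-1}\, g_T(\widehat{b}),$$
which is asymptotically $\chi^2$ with degrees of freedom equal to the number of moments minus the number of parameters.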