I have a question about estimating a difference-in-differences model in Stata. As I understand it, also from other questions, when there are no covariates, estimating the diff-in-diff with an ordinary regression (including a dummy for the treatment year, a dummy for treatment, and their interaction) gives the same results as estimating it with a fixed-effects command such as Stata's xtreg. That is indeed what happens with my data, but the standard errors are completely different: when I use Stata's "reg" command I get no significance at all, whereas with xtreg I get a t-statistic above 2, with standard errors almost 4 times smaller. Why is that? What does it suggest about the validity of the model and about which command to use? And what would be best to do when I later add covariates?
Edit: here is an example from my code:
```
gen y07 = 1 if year==2017
replace y07 = 0 if y07 != 1
gen did = y07*treat
xtset id year
xtreg y y07 did, fe r

Fixed-effects (within) regression               Number of obs     =      4,568
Group variable: id                              Number of groups  =      2,284

R-sq:                                           Obs per group:
     within  = 0.0131                                         min =          2
     between = 0.0008                                         avg =        2.0
     overall = 0.0011                                         max =          2

                                                F(2,2283)         =      12.73
corr(u_i, Xb) = 0.0069                          Prob > F          =     0.0000

                             (Std. Err. adjusted for 2,284 clusters in id)
------------------------------------------------------------------------------
             |               Robust
           y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         y07 |   .5117687   .1409194     3.63   0.000     .2354253    .7881121
         did |   .8282564   .4076776     2.03   0.042     .0287991    1.627714
       _cons |   8.272329   .0809889   102.14   0.000      8.11351    8.431149
-------------+----------------------------------------------------------------
     sigma_u |  18.188562
     sigma_e |  5.4737922
         rho |  .91695247   (fraction of variance due to u_i)
------------------------------------------------------------------------------

reg y treat y07 did, r

Linear regression                               Number of obs     =      4,568
                                                F(3, 4564)        =       1.80
                                                Prob > F          =     0.1441
                                                R-squared         =     0.0013
                                                Root MSE          =     18.597

------------------------------------------------------------------------------
             |               Robust
           y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       treat |   .6513042   .7340444     0.89   0.375    -.7877781    2.090386
         y07 |   .5117687   .6775766     0.76   0.450    -.8166093    1.840147
         did |   .8282564   1.161073     0.71   0.476    -1.448009    3.104522
       _cons |   8.045057   .4404064    18.27   0.000     7.181647    8.908467
------------------------------------------------------------------------------
```
Of course I was imprecise in saying the standard error was four times smaller; it is slightly less than three times, but the point stands. And of course, the variable "treat" denotes assignment to the treatment group.
Best Answer
You need to compare apples to apples, so use clustering with OLS and clustering with `xtreg, fe` (or `robust` with `xtreg, fe`, which defaults to clustering, as Thomas pointed out). These coefficient equivalences are limited to two-period (one pre, one post) datasets with treatment at the same time for all treated units.

Here's an example of a 2x2 DID on a public dataset demonstrating this. Here NJ restaurants are treated (become subject to the minimum wage increase) and PA restaurants are not. February '92 (t = 0) is pre and November '92 (t = 1) is post. The DID parameter is the interaction of t = 1 and NJ = 1. The outcome fte is full-time-equivalent employees. Here I will balance the panel in order to get `xtreg, fe` and OLS to give the same coefficient estimates. If the panel is unbalanced (consists of repeated cross-sections), `xtreg, fe` will drop observations that appear in only one year, and the estimates will no longer match OLS or manual calculations. You may want to stick with clustered OLS if you have a repeated cross-section.

Here is the result. Note that you can use factor-variable notation to create the interactions rather than hard-coding them.
Clustering in DID settings is a good idea for reasons outlined in Bertrand, Duflo, and Mullainathan's 2004 QJE paper. Clustering at the level of treatment is also a good idea, but here that is not feasible since there are not enough clusters (treatment is a state law and we have data from only two states) for it to work well. Generally your SEs will go up when you cluster in DID, but if the errors are negatively correlated within cluster, they may shrink. See this post for the reasons why.
Code:
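A sketch of the commands described above. The variable names — sheet (store identifier), t (post-period indicator), nj (treatment-state indicator), fte (outcome) — and the dataset file name are assumptions, so adapt them to your copy of the Card and Krueger data:

```
* Assumed variables: sheet = store id, t = 0/1 period, nj = 0/1 state, fte = outcome
use cardkrueger, clear            // dataset name assumed

* Balance the panel: keep only stores observed in both periods
bysort sheet: keep if _N == 2

xtset sheet t

* FE estimate; robust standard errors default to clustering on sheet
xtreg fte i.t##i.nj, fe vce(robust)

* Clustered OLS gives the same DID coefficient (the i.t#i.nj interaction)
reg fte i.t##i.nj, vce(cluster sheet)
```

With the panel balanced, the coefficient on the t#nj interaction should be identical across the two commands, and the clustered standard errors comparable, which is the apples-to-apples comparison the answer describes.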