You could approach this problem using probit models; once you've figured out whether there's an issue and how it should be handled, you could fit the equivalent logit models for ease of interpretation if you don't want to stick with probit. The two are essentially the same model in many ways, but probit offers some options that relate directly to your question.
I believe you could fit your model with something like xtgee or oglm to get a first model. Then you can fit a heteroskedastic probit (with oglm or a similar command). Since the ordinary probit model is nested within the heteroskedastic probit model, you can then run a likelihood-ratio (LR) test of nested models to see whether the heteroskedastic specification improves the fit.
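A minimal sketch of that workflow in Stata, assuming a binary outcome y, predictors x1 and x2, and a variable z suspected of driving the heteroskedasticity (Stata's built-in hetprobit is used here in place of oglm's hetero() option):

    probit y x1 x2                // baseline homoskedastic probit
    estimates store homo
    hetprobit y x1 x2, het(z)     // heteroskedastic probit; oglm with
                                  //   link(probit) hetero(z) is similar
    estimates store het
    lrtest homo het, force        // LR test of the nested models; force is
                                  //   needed because the commands differ

Note that hetprobit itself also reports an LR test against the homoskedastic model in its output, so the explicit lrtest is mainly for illustration.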
I've read a surprising amount of "ignore it" advice regarding heteroskedasticity and binary outcomes. That seems like a bad idea, particularly given how many corrections are available. Various robust options in Stata commands address some related issues and are explained well in the Stata documentation.
I'd say I'm only slightly past beginner status with this level of detail on advanced models, which translates to "use my advice as a good starting point." I might be able to come up with something better given more information about your data.
Here are some places where you could do some digging, based on what you already know and the little bit of direction I've offered:
http://www3.nd.edu/~rwilliam/oglm/oglm_Stata.pdf - a fairly in-depth discussion that explains things with reference to a specific Stata command (oglm).
Allison, Paul. 1999. "Comparing Logit and Probit Coefficients Across Groups." Sociological Methods and Research 28(2): 186-208.
Yatchew, Adonis, and Zvi Griliches. 1985. "Specification Error in Probit Models." The Review of Economics and Statistics 67(1): 134-139.
Hope this helps.
No, that's not a problem.
If you have an intercept term in your model, one of the state dummies will be dropped, and the others then give the state means relative to the omitted state. In this case, statistically significant state dummies just mean that those states have means that are statistically significantly different from the omitted state's mean. I can't see how this would be a problem.
If you instead have dropped your intercept term, and therefore can include dummies for all states, the state dummies simply capture the mean for each state. In this case, statistically significant dummies just mean that those states have means that are statistically significantly different from zero. I can't see how this would be a problem, either.
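To make the two parameterizations concrete, here is a minimal Stata sketch (y and state are placeholder variable names):

    * With an intercept: one state is omitted, and each dummy
    * coefficient is the difference from that base state's mean.
    regress y i.state
    * Without an intercept: dummies for all states are kept, and
    * each coefficient is simply that state's mean.
    regress y ibn.state, noconstant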
A standard way of correcting for this is to use heteroskedasticity- and autocorrelation-consistent (HAC) standard errors, also known after their developers as Newey-West standard errors. They can be applied in Stata using the newey command. The Stata help file for this command is here: http://www.stata.com/help.cgi?newey
The difficulty in applying these errors is that you need to choose the number of lags you want the procedure to consider in the autocorrelation structure. The standard autocorrelation tests usually provide good guidance, though.
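A minimal sketch, assuming a time variable t, outcome y, and regressor x (the lag length of 4 is just a placeholder; pick it from your autocorrelation diagnostics):

    tsset t             // newey requires the time dimension to be declared
    newey y x, lag(4)   // Newey-West (HAC) standard errors allowing
                        //   autocorrelation up to 4 lags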
This approach relies on asymptotics, so it works better with larger data sets.
There are alternatives, including the block bootstrap (sketched after the reference below). Check out this article for a comparison of approaches to dealing with autocorrelation in panel data:
Bertrand, Marianne, Esther Duflo, and Sendhil Mullainathan. 2004. "How Much Should We Trust Differences-in-Differences Estimates?" Quarterly Journal of Economics 119(1): 249-275. [prepub version]
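For the block bootstrap, the idea is to resample whole panels rather than individual observations, so the serial dependence within each panel is preserved. A minimal sketch, assuming a panel identifier id (the number of replications is just a placeholder):

    * Resample entire panels (clusters) so within-panel
    * autocorrelation survives each bootstrap replication.
    bootstrap, cluster(id) reps(500): regress y x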