Solved – Standard error for a statistic obtained via simulation

regression, simulation, variance

I am simulating the effect of certain conditions on estimates obtained using OLS regression. In running 100 replications I get 100 sets of standard errors for the regression coefficients. How do I calculate the standard error of the estimates? Is it simply the standard deviation of the estimated standard errors for a given coefficient divided by the square root of the number of replications?
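For concreteness, here is a minimal sketch of the kind of replication setup described above, with a made-up data-generating model (the model, sample size, and true coefficients are illustrative assumptions); the last line computes the quantity proposed in the question:

```r
set.seed(1)
R <- 100          # number of replications
n <- 50           # sample size per replication (assumed)

coef.est <- matrix(NA, R, 2)  # estimated coefficients, one row per replication
se.est   <- matrix(NA, R, 2)  # estimated standard errors, one row per replication

for (r in 1:R) {
  x <- rnorm(n)
  y <- 1 + 2 * x + rnorm(n)   # assumed true model
  fit <- lm(y ~ x)
  coef.est[r, ] <- coef(fit)
  se.est[r, ]   <- summary(fit)$coefficients[, "Std. Error"]
}

## the quantity proposed in the question: SD of the estimated standard
## errors for a given coefficient, divided by sqrt(R)
apply(se.est, 2, sd) / sqrt(R)
```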

Thanks in advance.

Best Answer

I do not know what you mean by "certain conditions", but if you want to quantify the uncertainty in the regression coefficients via simulation, you generally have to take into account the uncertainty about the residual standard deviation (because Var$(\,\hat{\beta}\,|\,X\,) = \sigma^2\,(X^TX)^{-1}$, and under normal errors $(n-k)\,\hat{\sigma}^2/\sigma^2 \sim \chi^2_{n-k}$; perhaps there is no such uncertainty in your case) as well as the uncertainty in the regression coefficients themselves (which are usually assumed to follow a multivariate normal distribution).
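A minimal sketch of that two-step simulation for an ordinary lm fit (the example data, sample size, and object names are illustrative assumptions, not part of the original question): first draw the residual standard deviation from its scaled $\chi^2_{n-k}$ distribution, then draw the coefficients from the corresponding multivariate normal.

```r
library(MASS)   # for mvrnorm()

set.seed(1)

## example data and fit (purely illustrative)
n <- 50
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
fit <- lm(y ~ x)

n.sims    <- 1000
beta.hat  <- coef(fit)
V.beta    <- summary(fit)$cov.unscaled   # (X'X)^{-1}
sigma.hat <- summary(fit)$sigma          # estimated residual standard deviation
df        <- n - length(beta.hat)        # n - k

## 1) draw sigma: (n - k) * sigma.hat^2 / sigma^2 ~ chi^2_{n-k}
sigma.sim <- sigma.hat * sqrt(df / rchisq(n.sims, df))

## 2) given each sigma, draw beta ~ N(beta.hat, sigma^2 (X'X)^{-1})
beta.sim <- t(sapply(sigma.sim, function(s) mvrnorm(1, beta.hat, s^2 * V.beta)))

## simulation-based standard errors of the coefficients
apply(beta.sim, 2, sd)
```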

There is a nice chapter about simulations in

Gelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.

which also provides an R function, sim (in the arm package), that does what you want to do. You can check how this function is programmed in R to get an impression of what is involved, or read their chapter about simulations in the book.
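A possible usage sketch, assuming the arm package is installed and that its coef() accessor applies to sim objects (reusing a fitted lm object, here called fit, as in the sketch above):

```r
library(arm)

sims <- sim(fit, n.sims = 1000)   # simulate coefficient vectors and sigmas

beta.sim <- coef(sims)            # one row per simulation, one column per coefficient

## simulation-based standard errors of the coefficients
apply(beta.sim, 2, sd)
```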
