Monte Carlo – Standard Deviation of Estimated Parameters in Monte Carlo Simulation

monte carlo, regression, standard deviation, standard error

I am new to Monte Carlo simulation and have a question. What is the connection between the standard errors of the estimates that we normally get from a regression and the standard deviation of the sampling distribution for the same parameter that we get from a Monte Carlo simulation? I notice that the means of these two over several repetitions are very close. Conceptually I can understand that they should be close, but I am still not quite clear about it. And what do we conclude if these two values are not close? Is there any theoretical proof showing the connection between the two? Any clarification is much appreciated.

Best Answer

This is a great question. If we conceptualize repeating the experiment an infinite number of times, then there is one true standard error that we will never know. The standard error estimate produced by most regression packages replaces unknown parameters with estimates of those parameters. When you perform a simulation, you can capture the estimated standard error in each simulated run and then plot these estimates in a histogram. This shows the sampling variability of the standard error estimator. You can compare these standard error estimates to the Monte Carlo standard error, i.e. the standard deviation of the parameter estimates across the simulated runs. The Monte Carlo standard error can be brought arbitrarily close to the one true standard error with enough simulated runs.
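As a rough sketch of what that comparison looks like in practice, here is a minimal simulation, assuming a simple linear regression with a fixed design, normal errors, and made-up parameter values (none of these numbers come from the original question):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50          # observations per simulated data set
n_sims = 5000   # number of Monte Carlo replications
beta0, beta1 = 1.0, 2.0
sigma = 3.0     # true error standard deviation (known only because we chose it)

x = rng.uniform(0, 10, size=n)           # fixed design across replications
X = np.column_stack([np.ones(n), x])     # design matrix with intercept
XtX_inv = np.linalg.inv(X.T @ X)

slope_estimates = np.empty(n_sims)
slope_se_estimates = np.empty(n_sims)

for i in range(n_sims):
    y = beta0 + beta1 * x + rng.normal(0, sigma, size=n)
    # OLS fit: beta_hat = (X'X)^{-1} X'y
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    # usual package-style SE: replaces sigma^2 with s^2 = RSS / (n - p)
    s2 = resid @ resid / (n - X.shape[1])
    se_hat = np.sqrt(s2 * np.diag(XtX_inv))
    slope_estimates[i] = beta_hat[1]
    slope_se_estimates[i] = se_hat[1]

# the "one true" standard error of the slope, computable here because sigma is known
true_se = sigma * np.sqrt(np.diag(XtX_inv))[1]

print("true SE of slope:               ", true_se)
print("Monte Carlo SE (SD of slopes):  ", slope_estimates.std(ddof=1))
print("mean of estimated SEs:          ", slope_se_estimates.mean())
```

With a few thousand replications, the Monte Carlo standard error and the mean of the package-style standard error estimates should both land close to the true value computed from the known sigma, while a histogram of slope_se_estimates shows how much the estimated standard error itself varies from run to run.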

In non-normal models the Wald, score, and likelihood ratio tests treat the estimated standard error as known (as if it were the one true standard error). In a normal model the t- and F-tests are special in that they account for the sampling variability of the standard error estimator.
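One small numerical illustration of that last point (my own example, not part of the original answer): in a small sample the t critical value is noticeably larger than the normal one, precisely because the t distribution builds in the extra uncertainty from estimating the standard error.

```python
from scipy.stats import norm, t

n, p = 20, 2   # hypothetical small sample: intercept plus one slope
alpha = 0.05

# critical value if the estimated SE were treated as the true SE
print("normal critical value:       ", norm.ppf(1 - alpha / 2))   # about 1.96
# critical value that accounts for estimating the SE from the data
print("t critical value (df = n-p): ", t.ppf(1 - alpha / 2, df=n - p))  # about 2.10
```

As n grows, the degrees of freedom increase and the two critical values converge, which is why the distinction matters most in small samples.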
