Solved – Bootstrap confidence interval interpretation; intervals too wide when testing a sample mean

bootstrap, confidence interval, hypothesis testing

I implemented basic, Studentized, and percentile bootstrap methods (in Matlab) to test whether a sample mean is significantly different from zero. (I realize from the limited research I've done so far that this may not be the best approach, but for now I'm only trying to replicate previous work.) My question concerns the interpretation of bootstrap confidence intervals (CIs), and why I don't obtain the expected result when testing my function with random data.

My understanding of the CI interpretation is that if one repeatedly samples from the population and constructs a CI each time, then the true population parameter will fall inside the CI a proportion (1 − alpha) of the time, i.e., outside it a proportion alpha of the time. But when I run this check with either uniform or normally distributed random data centered at 0, with alpha = 0.05, zero falls outside the CI in only about 0.006 of cases. With alpha = 0.25, it falls outside only 0.11 of the time. The same holds for all three bootstrap methods mentioned above, so the CIs I obtain by bootstrap seem to be much too wide. I perform a two-tailed test at level alpha (with alpha/2 on each side), and I checked that both tails contribute.
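The coverage check described above can be sketched as follows. The original code is Matlab; this is a hedged Python translation of the same idea (the function names `percentile_bootstrap_ci` and `coverage_test` are mine, not from the original code), using only the percentile method: simulate many datasets whose true mean is 0, build a percentile bootstrap CI for each, and count how often 0 falls outside. For a well-calibrated interval the miss rate should be close to alpha, not an order of magnitude below it.

```python
import numpy as np

rng = np.random.default_rng(0)

def percentile_bootstrap_ci(x, alpha=0.05, n_boot=1000, rng=rng):
    """Percentile bootstrap CI for the mean: resample x with replacement,
    then take the alpha/2 and 1 - alpha/2 quantiles of the bootstrap means."""
    boot_means = np.array([
        rng.choice(x, size=len(x), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

def coverage_test(alpha=0.05, n_trials=300, n=30, rng=rng):
    """Fraction of trials in which the true mean (0) falls OUTSIDE the CI.
    Should be roughly alpha; a value far below alpha means the CIs are
    too wide (the symptom described in the question)."""
    misses = 0
    for _ in range(n_trials):
        x = rng.normal(0.0, 1.0, size=n)  # data centered at the true mean 0
        lo, hi = percentile_bootstrap_ci(x, alpha=alpha, rng=rng)
        if not (lo <= 0.0 <= hi):
            misses += 1
    return misses / n_trials
```

With a correct resampling step, `coverage_test(alpha=0.05)` lands near 0.05 (slightly above it for small n, since the percentile bootstrap is mildly anticonservative), rather than the 0.006 reported in the question.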

Is there an explanation for this? Or should I continue trying to find a bug in my code?

Best Answer

It turns out the "wrong" false-positive proportions I was seeing were indeed due to errors in my code, so this was a programming issue rather than one of statistical interpretation. Still, this particular error illustrates how sensitive the bootstrap can be: my resampling step was apparently not quite equivalent to uniform random sampling with replacement, and it generated a wider bootstrap distribution.
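A quick way to catch this class of resampling bug (a sketch in Python rather than the original Matlab; the specific numbers are illustrative assumptions, not from the original code) is to compare the spread of the bootstrap means against the analytic standard error of the mean, s/sqrt(n). The two should agree closely; a resampling step that is not uniform-with-replacement typically shows up as an inflated bootstrap spread, which in turn produces the overly wide CIs described in the question.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=200)

# Correct resampling: each bootstrap sample is n index draws, uniform
# with replacement, from the n original observations.
boot_means = np.array([
    x[rng.integers(0, len(x), size=len(x))].mean()
    for _ in range(5000)
])

# Sanity check: the standard deviation of the bootstrap means should
# match the standard error of the mean, s / sqrt(n).
sem = x.std(ddof=1) / np.sqrt(len(x))
print(boot_means.std(ddof=1), sem)  # the two values should agree closely
```

If the first printed number is noticeably larger than the second, the resampling step is widening the bootstrap distribution and the resulting CIs will over-cover, exactly as observed.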
