The terminology is probably not used consistently, so the following is only how I understand the original question. As I read it, the normal CIs you computed are not what was asked for. Each set of bootstrap replicates gives you one confidence interval, not many. The way to compute different CI types from one set of bootstrap replicates is as follows:
B <- 999 # number of replicates
muH0 <- 100 # for generating data: true mean
sdH0 <- 40 # for generating data: true sd
N <- 200 # sample size
DV <- rnorm(N, muH0, sdH0) # simulated data: original sample
Since I want to compare the calculations against the results from package boot, I first define a function that will be called for each replicate. Its arguments are the original sample and an index vector specifying the cases for a single replicate. It returns $M^{\star}$, the plug-in estimate for $\mu$, as well as $S_{M}^{2\star}$, the plug-in estimate for the variance of the mean $\sigma_{M}^{2}$. The latter will be required only for the bootstrap $t$-CI.
> getM <- function(orgDV, idx) {
+     n     <- length(idx)                          # resample size (equals N here)
+     bsM   <- mean(orgDV[idx])                     # M*
+     bsS2M <- (((n-1) / n) * var(orgDV[idx])) / n  # S^2*(M)
+     c(bsM, bsS2M)
+ }
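As a quick sanity check, calling getM() with the identity index vector should recover the estimates from the original sample, i.e., $M$ and $S_{M}^{2}$ as computed further below:

> getM(DV, seq_along(DV)) # same as c(mean(DV), ((N-1)/N * var(DV)) / N)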
> library(boot) # for boot(), boot.ci()
> bOut <- boot(DV, statistic=getM, R=B)
> boot.ci(bOut, conf=0.95, type=c("basic", "perc", "norm", "stud"))
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 999 bootstrap replicates
CALL :
boot.ci(boot.out = bOut, conf = 0.95, type = c("basic", "perc", "norm", "stud"))
Intervals :
Level      Normal              Basic             Studentized
95%   ( 95.6, 106.0 )   ( 95.7, 106.2 )   ( 95.4, 106.2 )

Level     Percentile
95%   ( 95.4, 106.0 )
Calculations and Intervals on Original Scale
Without using package boot, you can simply use replicate() to get a set of bootstrap replicates.
boots <- t(replicate(B, getM(DV, sample(seq_along(DV), replace=TRUE))))
But let's stick with the results from boot.ci() to have a reference.
boots <- bOut$t # estimates from all replicates
M <- mean(DV) # M from original sample
S2M <- (((N-1)/N) * var(DV)) / N # S^2(M) from original sample
Mstar <- boots[ , 1] # M* for each replicate
S2Mstar <- boots[ , 2] # S^2*(M) for each replicate
biasM <- mean(Mstar) - M # bias of estimator M
The basic, percentile, and $t$-CI rely on the empirical distribution of the bootstrap estimates. To get its $\alpha/2$ and $1 - \alpha/2$ quantiles, we find the corresponding indices into the sorted vector of bootstrap estimates (note that boot.ci() does a more complicated interpolation to find the empirical quantiles when the positions $(B+1)\,\alpha/2$ are not natural numbers).
> (idx <- trunc((B + 1) * c(0.05/2, 1 - 0.05/2))) # indices for sorted vector of estimates
[1] 25 975
> (ciBasic <- 2*M - sort(Mstar)[idx]) # basic CI (note: upper bound comes first)
[1] 106.21826 95.65911
> (ciPerc <- sort(Mstar)[idx]) # percentile CI
[1] 95.42188 105.98103
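As an aside, quantile() with type=6 uses exactly these $(B+1)p$ plotting positions and interpolates linearly when they are not integers; this is still simpler than what boot.ci() does, which interpolates on the normal quantile scale:

> quantile(Mstar, probs=c(0.025, 0.975), type=6) # interpolated percentile CI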
For the $t$-CI, we need the bootstrap $t^{\star}$ estimates to calculate the critical $t$-values. For the standard normal CI, the critical value will just be the $z$-value from the standard normal distribution.
# standard normal CI with bias correction
> zCrit <- qnorm(c(0.025, 0.975)) # z-quantiles from std-normal distribution
> (ciNorm <- M - biasM + zCrit * sqrt(var(Mstar)))
[1] 95.5566 106.0043
> tStar <- (Mstar-M) / sqrt(S2Mstar) # t*
> tCrit <- sort(tStar)[idx] # t-quantiles from empirical t* distribution
> (ciT <- M - tCrit * sqrt(S2M)) # studentized t-CI (note: upper bound comes first)
[1] 106.20690 95.44878
In order to estimate the coverage probabilities of these CI types, you will have to run this simulation many times. Just wrap the code into a function, return a list with the CI results, and run it with replicate(), as demonstrated in this gist.
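For illustration, here is a minimal sketch of such a coverage simulation (not the linked gist). It reuses muH0, sdH0, N, and B from above and checks only the basic and percentile CIs:

getCIs <- function() {
    DV    <- rnorm(N, muH0, sdH0)                # new simulated sample
    M     <- mean(DV)                            # its mean
    Mstar <- replicate(B, mean(sample(DV, replace=TRUE)))
    idx   <- trunc((B + 1) * c(0.05/2, 1 - 0.05/2))
    list(basic=sort(2*M - sort(Mstar)[idx]),     # basic CI, bounds sorted
         perc =sort(Mstar)[idx])                 # percentile CI
}
ciList <- replicate(200, getCIs())               # 2 x 200 matrix of CI vectors
covers <- function(ci) (ci[1] <= muH0) && (muH0 <= ci[2])
mean(sapply(ciList["basic", ], covers))          # estimated coverage, basic CI
mean(sapply(ciList["perc",  ], covers))          # estimated coverage, percentile CI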
This confusion between bootstrap procedures and Monte Carlo procedures keeps recurring, so perhaps this is as good a place as any to address it. (The examples of R code may help with the homework, too.)
Consider this implementation of the bootstrap in R:
boot <- function(x, t) { # Exact bootstrap of procedure t on data x
n <- length(x) # Must lie between 2 and 7 inclusive.
if (n > 7) {
stop("Sample size exceeds 7; use an approximate method instead.")
}
p <- c(n, 1:(n-1))
a <- rep(x, n^(n-1))
dim(a) <- rep(n, n)
y <- as.vector(a)
while (n > 1) {
n <- n-1
a <- aperm(a, p)
y <- cbind(as.vector(a), y)
}
apply(y, 1, t)
}
A quick look will confirm that this is a deterministic calculation: no random values are generated or used. (I will leave the details of its inner workings for interested readers to figure out for themselves.)
The arguments to boot are a batch of numeric data in the array x and a reference t to a function (that can be applied to arrays exactly like x) to return a single numeric value; in other words, t is a statistic. It generates all possible samples with replacement from x and applies t to each of them, thereby producing one number for each such sample: that's the bootstrap in a nutshell. The return value is an array representing the exact bootstrap distribution of t for the sample x.
As a tiny example, let's bootstrap the mean for a sample x = c(1,3):
> boot(c(1,3), mean)
[1] 1 2 2 3
There are indeed four possible samples of size $2$ with replacement from $(1,3)$; namely, $(1,1)$, $(1,3)$, $(3,1)$, and $(3,3)$. boot generates them all (in the order just listed) and applies t to each of them. In this case t computes the mean, and those means turn out to be $1$, $2$, $2$, and $3$, respectively, as shown in the output.
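If you want to verify the enumeration independently, one straightforward (if memory-hungry) way is to list all $n^n$ index tuples with expand.grid(); the ordering may differ from that of boot:

x <- c(1, 3)
idx <- expand.grid(rep(list(seq_along(x)), length(x))) # all n^n index tuples
apply(idx, 1, function(i) mean(x[i]))                  # the statistic for each sample
# [1] 1 2 2 3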
Where you go from here depends on how you want to use the bootstrap. The full information about the bootstrap is contained in this output array, so it's usually a good idea to display it. Here is an example where the standard deviation is bootstrapped from the sample $(1,3,3,4,7)$:
hist(boot(c(1,3,3,4,7), sd))
Now we are prepared to talk about Monte Carlo simulation. Suppose, say, we were going to bootstrap a 95% upper confidence limit on the SD from a sample of $5$ by using the upper 95th percentile of its bootstrap distribution. What properties would this procedure have? One way to find out is to suppose the sample were obtained randomly from, say, a uniform distribution. (The application will often indicate what a reasonable distributional assumption may be; here, I arbitrarily chose one that is simple for computation but not easy to deal with analytically.) We can simulate what happens by taking such a sample and computing the UCL:
> set.seed(17)
> quantile(boot(runif(5, min=0, max=10), sd), .95)[1]
95%
3.835870
The result for this particular random sample is 3.83587. This is definite: were you to call boot again with the same set of data, the answer would be exactly the same. But how might the answer change with different random samples? Find out by repeating this process a few times and drawing a histogram of the results:
> boot.sd <- replicate(100, quantile(boot(runif(5, min=0, max=10), sd), .95)[1])
> hist(boot.sd)
Were we to do another set of simulations, the random draws would come out different, creating a (slightly) different histogram--but not very different from this one. We can use it with some confidence to understand how the bootstrap UCL of the SD is working. For reference, notice that the standard deviation of a uniform distribution (spanning the range from $0$ to $10$ as specified here) equals $10/\sqrt{12} \approx 2.887$. As one would hope for any UCL worth its salt, the majority (three-quarters, or 0.75) of the values in the histogram exceed this:
> length(boot.sd[boot.sd >= 10/sqrt(12)]) / length(boot.sd)
[1] 0.75
But that's nowhere near the nominal 95% we specified (and were hoping for)! This is one value of simulation: it compares our hopes to what is really going on. (Why the discrepancy? I believe it is because bootstrapping an SD does not work well with really small samples.)
Review
Bootstrap statistics are conceptually just the same as any other statistic, like a mean or standard deviation; they just tend to take a long time to compute. (See the warning message in the boot code!)
Monte Carlo simulation can be useful for studying how a bootstrap statistic varies due to randomness in obtaining samples. The variation observed in such a simulation is due to variation in the samples, not to variation in the bootstrap.
(Sketched below.) Because bootstrap statistics can take a lot of computation (apparently, up to $n^n$ calculations for samples of size $n$), it is convenient to approximate the bootstrap distribution. This is usually done by creating a "black box" program to obtain one value at random from the true bootstrap distribution and calling that program repeatedly. The collective output approximates the exact distribution. The approximation can vary due to randomness in the black box--but that variation is an artifact of the approximation procedure. It is not (conceptually) inherent in the bootstrap procedure itself.
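Here is a minimal sketch of such a black box, assuming the usual random resampling: draw one sample with replacement, apply the statistic, and call the routine repeatedly:

boot1 <- function(x, t) t(sample(x, length(x), replace=TRUE)) # one random draw from the bootstrap distribution
approx.dist <- replicate(9999, boot1(c(1,3,3,4,7), sd))       # approximate bootstrap distribution
quantile(approx.dist, 0.95)                                   # compare with the exact UCL computed earlier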
Best Answer
The link in your question describes an exact bootstrap distribution, where every possible way of resampling is computed. For large samples this can be infeasible due to the number of computations required, and an alternative is to use random resampling from the sample.
So your bootstrap distribution (from which a confidence interval can be approximated) is created from one single observed sample, and that single sample is resampled multiple times to compute the bootstrap distribution of a statistic describing it.
Your homework asks for another level of repetition: repeat the bootstrap procedure described above (which applies to a single sample) with multiple simulated observed samples, and record the properties of the resulting confidence intervals.
So you create $N$ simulations of an observed sample. For each observed sample you resample $M$ times to create an estimate of the bootstrap distribution and an estimate of a confidence interval. You end up with $N$ confidence intervals, one for each simulated sample, as sketched below.
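A compact sketch of the two levels (sizes are illustrative): the outer replicate() draws the $N$ observed samples, the inner one performs the $M$ bootstrap resamples:

N <- 1000; M <- 999; n <- 20                       # illustrative sizes
cis <- replicate(N, {
    smp   <- rnorm(n, mean=100, sd=40)             # one observed sample
    Mstar <- replicate(M, mean(sample(smp, replace=TRUE)))
    quantile(Mstar, c(0.025, 0.975))               # percentile CI for this sample
})
mean(cis[1, ] <= 100 & 100 <= cis[2, ])            # estimated coverage of the true mean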