Confidence Interval – How to Relate the Central Limit Theorem to Confidence Intervals


I'm having trouble understanding how the Central Limit Theorem (CLT) implies that we can create confidence intervals as we do. For example, Slide 5 from these lecture notes essentially lays out the following logic for how the CLT can be used to construct a confidence interval:

We have a point estimate $\bar{X}$ for the population mean $\mu$, but we want to design a “net” to have a reasonable chance of capturing $\mu$.

  1. From the CLT we know that we can think of $\bar{X}$ as a sample from $N(\mu, \sigma^2/n)$

  2. Therefore, approximately 95% of samples from the population should have $\bar{X}$s within 2 SEs ($2\sigma/\sqrt{n}$) of $\mu$.

  3. Therefore, for approximately 95% of samples from the population, $\mu$ must be within 2 SEs of $\bar{X}$.

I'm good with points 1 and 2 shown above, but I don't understand how those two points (or anything else that the CLT says) can be used to come up with point 3. In other words, how does point 3 follow from points 1 and 2? It seems to me that the CLT speaks to how confident we can be that a sample mean will fall within some interval surrounding the population mean (i.e., point 2), as opposed to saying how confident we can be that some interval surrounding a sample mean will contain the population mean (i.e., point 3).
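For what it's worth, the equivalence of points 2 and 3 can be checked directly by simulation: the two statements describe the same event, written two ways. A minimal sketch in Python (all parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 3.0, 50       # hypothetical population parameters
se = sigma / np.sqrt(n)

# Simulate the sample mean for many independent samples
xbar = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

# Point 2: Xbar is within 2 SEs of mu
event2 = np.abs(xbar - mu) <= 2 * se
# Point 3: mu is within 2 SEs of Xbar
event3 = (xbar - 2 * se <= mu) & (mu <= xbar + 2 * se)

print(np.array_equal(event2, event3))   # the events coincide sample by sample
print(event2.mean())                    # close to 0.9545 (the "2 SE" probability)
```

The two indicator arrays are identical because $|\bar{X} - \mu| \leqslant 2\sigma/\sqrt{n}$ is symmetric in $\bar{X}$ and $\mu$; that algebraic symmetry is what takes you from point 2 to point 3.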

Best Answer

Probability interval using a pivotal quantity: Confidence intervals are formed from an underlying probability interval for a pivotal quantity. In the present case, if $\sigma$ is treated as known, and if $n$ is large enough to justify the required distributional approximation, then you have the pivotal quantity:

$$\frac{\bar{X} - \mu}{\sigma / \sqrt{n}} \overset{\text{Approx}}{\sim} \text{N}(0,1).$$

This result comes from application of the central limit theorem (CLT), assuming that the underlying distribution meets the requirements of the theorem (e.g., finite variance) and a sufficiently large value of $n$. Using this pivotal quantity you can obtain the following probability interval:

$$\mathbb{P} \Bigg( - z_{\alpha/2} \leqslant \frac{\bar{X} - \mu}{\sigma / \sqrt{n}} \leqslant z_{\alpha/2} \Bigg) \approx 1- \alpha.$$

(Note that the value $z_{\alpha/2}$ is the critical value of the standard normal distribution having an upper-tail probability of $\alpha/2$.$^\dagger$) Re-arranging the inequalities inside the probability statement you obtain the equivalent probability statement:

$$\mathbb{P} \Bigg( \bar{X} - \frac{z_{\alpha/2}}{\sqrt{n}} \cdot \sigma \leqslant \mu \leqslant \bar{X} + \frac{z_{\alpha/2}}{\sqrt{n}} \cdot \sigma \Bigg) \approx 1- \alpha.$$

This shows that there is a fixed probability (approximately $1-\alpha$) that the stated bounds will capture the unknown mean parameter $\mu$. Note here that the sample mean $\bar{X}$ is the random quantity in the expression, so the statement expresses the probability that the random bounds of the interval capture the fixed parameter value $\mu$.
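As a quick sanity check, this probability statement can be verified by simulation even when the underlying data are far from normal — here a skewed exponential population (all parameter values hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma = 2.0, 2.0               # exponential with scale 2 has mean 2 and sd 2
n = 200
alpha = 0.05
z = stats.norm.ppf(1 - alpha / 2)  # z_{alpha/2}, approximately 1.96

# Simulate the sample mean for many independent samples of skewed data
xbar = rng.exponential(scale=2.0, size=(50_000, n)).mean(axis=1)
pivot = (xbar - mu) / (sigma / np.sqrt(n))

# Proportion of samples where the pivotal quantity lands in [-z, z]
coverage = np.mean((-z <= pivot) & (pivot <= z))
print(coverage)                    # close to 0.95, as the CLT predicts
```

Despite the skewness of the underlying distribution, the pivotal quantity is approximately standard normal at this sample size, so the probability statement holds to a good approximation.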


The confidence interval: From here, we form the confidence interval by substituting the observed sample mean, yielding the $1-\alpha$ level confidence interval:

$$\text{CI}_\mu(1-\alpha) = \Bigg[ \bar{x} \pm \frac{z_{\alpha/2}}{\sqrt{n}} \cdot \sigma \Bigg].$$

We refer to this as a "confidence interval" (as opposed to a probability interval) since we have now replaced the random bounds with observed bounds. Note that the mean parameter is treated as fixed, so the interval either does or does not contain the parameter; no non-trivial probability statement is applicable here.
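As a concrete illustration, the interval can be computed directly from an observed sample. A minimal sketch (the data values and the "known" $\sigma$ here are hypothetical):

```python
import numpy as np

# Hypothetical observed sample; sigma is treated as known
x = np.array([4.1, 5.3, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7])
sigma = 0.8
alpha = 0.05
z = 1.959964                       # z_{alpha/2} for alpha = 0.05

n = len(x)
xbar = x.mean()
half_width = z * sigma / np.sqrt(n)
ci = (xbar - half_width, xbar + half_width)
print(ci)                          # approximately (4.171, 5.279)
```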

Note that this particular confidence interval assumes that $\sigma$ is known. It is generally the case that this parameter is not known, and so we commonly derive a slightly different confidence interval that replaces the variance parameter with the sample variance. This interval has a similar derivation, using a pivotal quantity that follows a Student's t-distribution.
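A sketch of that t-based interval, on the same kind of hypothetical data, using `scipy.stats` for the critical value:

```python
import numpy as np
from scipy import stats

# Hypothetical observed sample; sigma is now unknown
x = np.array([4.1, 5.3, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7])
alpha = 0.05

n = len(x)
xbar = x.mean()
s = x.std(ddof=1)                            # sample standard deviation
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # t critical value, n-1 df
half_width = t_crit * s / np.sqrt(n)
ci = (xbar - half_width, xbar + half_width)
print(ci)
```

The t critical value is larger than the corresponding normal one, reflecting the extra uncertainty from estimating $\sigma$, so this interval is somewhat wider than the known-$\sigma$ version.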


Some further comments on your notes: The notes you have linked to seem to me to be pretty good on the whole. However, it is unfortunately the case that explanations of confidence intervals in statistics courses often skip over the actual derivation of the interval, relying instead on rough hand-waving. The logic presented in the linked notes is typical of the kind of vague explanation often given in introductory courses, where lecturers tend to prefer to minimise the mathematics.

Personally, I am not a fan of these kinds of vague explanations, especially since it is not terribly difficult to show the mathematical derivation of the interval. Some lecturers in this field regard the mathematical derivation as being too complicated to assist introductory students, and so they omit it, but I personally think it is more confusing to students to try to muddle out the logic behind the interval without a clear presentation of its derivation.

You can see from the above mathematics that the confidence interval is formed by analogy to an actual probability interval, which can be formed by re-arranging a simple probability statement for the pivotal quantity in the analysis. Once you understand the derivation of the probability interval, understanding the analogy to the confidence interval is quite simple.
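The repeated-sampling interpretation behind the word "confidence" can also be illustrated by simulation: construct the interval over many independent samples and count how often the fixed $\mu$ is captured (all parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, alpha = 10.0, 3.0, 40, 0.05   # hypothetical population and design
z = 1.959964                                # z_{alpha/2} for alpha = 0.05

# Many independent samples, one interval per sample
samples = rng.normal(mu, sigma, size=(50_000, n))
xbar = samples.mean(axis=1)
hw = z * sigma / np.sqrt(n)

# Does each random interval [xbar - hw, xbar + hw] capture the fixed mu?
covered = (xbar - hw <= mu) & (mu <= xbar + hw)
print(covered.mean())                       # close to 0.95
```

Each individual interval either contains $\mu$ or it does not; the $1-\alpha$ figure describes the long-run proportion of intervals that do.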


$^\dagger$ The critical point is defined mathematically as the (implicit) solution to:

$$\frac{\alpha}{2} = \frac{1}{\sqrt{2 \pi}} \int \limits_{z_{\alpha/2}}^\infty \exp (-\tfrac{1}{2} r^2) dr.$$
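Numerically, this critical value is just the standard-normal quantile at $1 - \alpha/2$; for example, using `scipy.stats`:

```python
from scipy import stats

alpha = 0.05
# Quantile function of N(0,1) inverts the tail-probability equation above
z = stats.norm.ppf(1 - alpha / 2)
print(round(z, 4))   # 1.96
```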
