Hypothesis Testing – Matching Confidence Limits with One-Sided Tests

Tags: confidence-interval, hypothesis-testing

I'm trying to understand how THIS ARTICLE (see p. 174, top-right corner) arrives at the following (especially the boldfaced part):

"To use the confidence intervals to test a statistical hypothesis
and to maintain a Type I error rate at alpha:

  1. When testing a two-sided hypothesis at the alpha level, use a 100(1 – alpha)% confidence interval.
  2. When testing a one-sided hypothesis at the alpha level, use a 100(1 – 2 alpha)% confidence interval."

Best Answer

If you generate a two-sided confidence interval with a confidence level of $95\%$ (i.e. $\alpha_1 = 5\%$), the endpoints of the interval leave a Type I error probability of $\frac{1}{2}\alpha_1 = 2.5\%$ at each end.

If you are performing a one-sided test and want to preserve a risk $\alpha = 5\%$ of rejecting the null when it is in fact true, you will want to generate a two-sided CI with a confidence level of $90\%$, leaving $5\%$ probability at each end.

So you double the initial $\alpha_1 =5\%$ to $\alpha_2=2\alpha_1=10\%.$

Hence the quote:

  1. When testing a two-sided hypothesis at the alpha level, use a $100(1 - \alpha )\%$ confidence interval.
  2. When testing a one-sided hypothesis at the alpha level, use a $100(1 - 2 \alpha)\%$ confidence interval.
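The equivalence in point 2 can be checked numerically. A minimal sketch with a one-sample z-test (hypothetical data; standard library only): rejecting $H_0\!: \mu \le 0$ at one-sided $\alpha = 5\%$ happens exactly when the two-sided $90\%$ CI lies entirely above $0$.

```python
import math
import random
from statistics import NormalDist, fmean, stdev

# Hypothetical data for a one-sided z-test: H0: mu <= 0 vs H1: mu > 0, alpha = 0.05
random.seed(1)
x = [random.gauss(0.4, 1.0) for _ in range(50)]

n = len(x)
se = stdev(x) / math.sqrt(n)
z = fmean(x) / se                        # test statistic under H0: mu = 0
p_one_sided = 1 - NormalDist().cdf(z)    # upper-tail p-value

# A two-sided 90% CI leaves 5% in each tail -- matching the one-sided alpha
z_crit = NormalDist().inv_cdf(0.95)
lo = fmean(x) - z_crit * se
hi = fmean(x) + z_crit * se

# The one-sided test rejects at alpha = 0.05 exactly when the 90% CI's
# lower endpoint exceeds 0; the two booleans always agree
print(p_one_sided < 0.05, lo > 0)
```

The agreement is algebraic, not a coincidence of this sample: $p < \alpha \iff z > z_{1-\alpha} \iff \bar{x} - z_{1-\alpha}\,\mathrm{se} > 0$, which is precisely the lower endpoint of the $100(1 - 2\alpha)\%$ interval.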
