Mathematical Statistics – Clopper–Pearson Confidence Interval for 100% Explained

confidence-interval, descriptive-statistics, mathematical-statistics

I was using the following site: https://www.medcalc.org/calc/diagnostic_test.php to generate confidence intervals for some data I collected. One of the data subsets had 81 true positives and 0 false positives, false negatives, and true negatives. The resulting confidence interval for the sensitivity was 95.55% – 100.00%, even though the sensitivity for the sample is 100%.

This seems off, and I'm wondering whether the methodology (according to the site's documentation, it computes exact Clopper–Pearson intervals) is not well defined for extreme values like this one. It seems strange to present the result as 100% with a confidence interval that isn't (100%, 100%).

Best Answer

A confidence interval is a range that is likely to contain the true parameter value. A confidence interval of width 0 would indicate absolutely no uncertainty whatsoever in your parameter estimate. You observed 81 true positives out of 81 cases, so your best point estimate of sensitivity is 100%. But that doesn't mean your model definitely, truly has exactly 100% sensitivity: if your model were actually 99.999% sensitive, it would be utterly unsurprising to test only 81 cases and never see a false negative.

From your limited number of cases, your model could have sensitivity as low as 95.55%, and it would still be statistically "reasonable" to observe 81 true positives and no false negatives. Concretely, the Clopper–Pearson lower bound in the all-successes case is the sensitivity p for which seeing 81 successes in a row still has probability α/2; solving p^81 = 0.025 gives p = 0.025^(1/81) ≈ 0.9555, which is exactly the 95.55% the site reported.

As you get more data, the confidence interval will typically get narrower, but no finite amount of data can make it have a width of zero. Even if you observe 100% sensitivity on a billion samples, that is still not sufficient to say that sensitivity is truly exactly 100% and that misclassifications are outright impossible.
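To make this concrete, here is a minimal sketch of an exact Clopper–Pearson interval in Python, using the standard beta-quantile formulation via `scipy.stats.beta` (the function name `clopper_pearson` is my own; it is not the MedCalc implementation, though it should reproduce the same numbers):

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial
    proportion, given x successes out of n trials.

    Lower bound: alpha/2 quantile of Beta(x, n - x + 1);
    upper bound: 1 - alpha/2 quantile of Beta(x + 1, n - x).
    The edge cases x = 0 and x = n pin the bound to 0 or 1.
    """
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

lo, hi = clopper_pearson(81, 81)
print(f"{lo:.2%} - {hi:.2%}")  # ≈ 95.55% - 100.00%
```

Note that when x = n, the lower bound reduces to the closed form (alpha/2)**(1/n), since the CDF of Beta(n, 1) is p^n; that is the p^81 = 0.025 equation above.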
