The fisher.test function in base R returns, by default, a confidence interval for the odds ratio in a 2×2 contingency table. For example:
> x <- c(100, 5, 70, 12)
> dim(x) <- c(2,2)
> fisher.test(x)
Fisher's Exact Test for Count Data
data: x
p-value = 0.02291
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
1.058526 12.904604
sample estimates:
odds ratio
3.406113
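(Note, incidentally, that the estimate labelled "odds ratio" above is not the simple cross-product odds ratio of the table: per the fisher.test help page, it is the conditional maximum-likelihood estimate, which here differs slightly from the unconditional one. A quick check, using the same matrix x as above:)

```r
# Same 2x2 table as above
x <- matrix(c(100, 5, 70, 12), nrow = 2)

# Simple (unconditional) cross-product odds ratio
sample_or <- (x[1, 1] * x[2, 2]) / (x[1, 2] * x[2, 1])
sample_or                 # 3.428571, not the 3.406113 reported above

# fisher.test instead reports the conditional MLE
fisher.test(x)$estimate   # odds ratio: 3.406113
```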
The confidence interval for an odds ratio is an extremely useful thing to know, and I would like to refer to it in an article I am currently writing. My dataset has a large enough n for a chi-squared test, but that would only give me a test statistic and a p-value, which are harder to interpret than a confidence interval for the odds ratio. However, I cannot find any explanation of how the confidence interval is calculated in this case, nor of the theoretical precedent for computing confidence intervals for odds ratios as part of a Fisher test (as opposed to a logistic regression).
Can anyone shed some light?
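(For comparison, the logistic-regression route I mention would give an odds-ratio interval like this — a sketch on the same counts as above, where confint() on the glm fit produces a profile-likelihood interval; the data.frame layout below is just one way of encoding the table as grouped binomial data:)

```r
# The same 2x2 table as grouped binomial data
# (column 1: 100 successes / 5 failures; column 2: 70 / 12)
df <- data.frame(
  success = c(100, 70),
  failure = c(5, 12),
  group   = factor(c("A", "B"), levels = c("B", "A"))
)

fit <- glm(cbind(success, failure) ~ group, family = binomial, data = df)

# Odds ratio and 95% profile-likelihood CI for group A vs group B
exp(cbind(OR = coef(fit), confint(fit)))["groupA", ]
```

The point estimate here is the plain cross-product odds ratio (about 3.43), and the interval is similar to, but not identical to, the one fisher.test reports.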
Best Answer
The R help page for fisher.test cites Fisher's 1962 letter to the Australian Journal of Statistics ("Confidence limits for a cross-product ratio").
In it he notes, by example: