I have two questions about Bonferroni adjustments.
1). Can one use the Bonferroni method to compare independent groups? I ask because many of the examples I've encountered discuss the Bonferroni method in the context of comparing dependent groups – for example, multiple comparisons after a repeated-measures ANOVA.
2). I created a set of simulated data (see code below for the reproducible dataset).
set.seed(123)
data <- data.frame(x = rep(letters[1:4], each = 5), y = sort(rlnorm(20)))
Then I ran pairwise.t.test() with p.adj = "bonf" (see below) to test all pairwise comparisons.
pairwise.t.test(x = data$y, g = data$x, p.adj = "bonf") # see results below:
# data: data$y and data$x
# a b c
# b 1.00000 - -
# c 0.38945 1.00000 -
# d 8.3e-06 3.5e-05 0.00031
# P value adjustment method: bonferroni
However, these results differ from those obtained by running the six pairwise comparisons individually with t.test() and then adjusting the p-values by hand, i.e. multiplying each one by 6, the number of comparisons (see below):
t.test(y~x, data[data$x=="a" | data$x=="b",])$p.value*6
t.test(y~x, data[data$x=="a" | data$x=="c",])$p.value*6
t.test(y~x, data[data$x=="a" | data$x=="d",])$p.value*6
t.test(y~x, data[data$x=="b" | data$x=="c",])$p.value*6
t.test(y~x, data[data$x=="b" | data$x=="d",])$p.value*6
t.test(y~x, data[data$x=="c" | data$x=="d",])$p.value*6
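The six calls above can also be written compactly with combn() and p.adjust(). Note one detail of p.adjust(): with method = "bonferroni" it returns pmin(1, n * p), i.e. it caps adjusted p-values at 1, whereas bare multiplication by 6 does not (for these data none of the six products exceeds 1, so the two approaches agree here). A minimal sketch:

```r
set.seed(123)
data <- data.frame(x = rep(letters[1:4], each = 5), y = sort(rlnorm(20)))

# All 6 unordered pairs of group labels: a 2 x 6 matrix
pairs <- combn(letters[1:4], 2)

# Raw p-value from a separate two-sample t.test() for each pair
raw <- apply(pairs, 2, function(p) {
  t.test(y ~ x, data = data[data$x %in% p, ])$p.value
})

# Bonferroni adjustment: pmin(1, 6 * raw)
adj <- p.adjust(raw, method = "bonferroni")
```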
The results are below:
# a vs. b = 0.0788128848
# a vs. c = 0.0001770066
# a vs. d = 0.0324680659
# b vs. c = 0.0137812904
# b vs. d = 0.0488036762
# c vs. d = 0.0970799045
These manually adjusted p-values are quite different from the ones returned by pairwise.t.test(), so I wonder why there are such big differences.
Best Answer