Even without Bonferroni corrections, a significant ANOVA does not guarantee that any two particular means differ. For example, a statistically significant ANOVA result could arise because two pairs of means jointly differ from each other even though no individual pairwise comparison is significant.
Consider why you run an ANOVA. You do it because running all of the comparisons a categorical predictor allows would create a multiple comparisons problem. But then you go and do many of those comparisons anyway... why? A significant ANOVA means that the pattern of data you see is meaningful. Describe that pattern, in both a figure and text, and convey what your data mean. If you really wanted to run all of the multiple comparisons, then running the ANOVA was pointless. Also, keep in mind that "all of the comparisons" does not mean just the comparisons between individual means but all of the patterns and combinations you could test; the ANOVA is sensitive to those too.
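As a rough sketch of how you would see this in practice (the data here are made up for illustration), you can compare the omnibus F test against the individual pairwise comparisons in R:

```r
# Made-up data: three groups whose means differ modestly. The omnibus F test
# asks about the overall pattern; pairwise.t.test runs the individual
# comparisons (here with a Bonferroni correction).
set.seed(1)
g <- factor(rep(c("a", "b", "c"), each = 20))
y <- rnorm(60, mean = rep(c(0, 0.4, 0.8), each = 20))

summary(aov(y ~ g))                                    # omnibus test
pairwise.t.test(y, g, p.adjust.method = "bonferroni")  # pairwise comparisons
```

With effects of this size, the omnibus test and the corrected pairwise tests can easily disagree, which is exactly the situation described above.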
In your particular case, you would write something like the following. There was a main effect of group, with higher scores in the experimental group, and a main effect of time, with the lowest score at the first time, followed by the last time, and the highest score at the intermediate time. However, each of these main effects was qualified by an interaction: the effect of time depended on which group you were in, being greater in the experimental group than in the control group.
That's what your ANOVA and summary statistics say. Unless there's something more than that you want to say, there's no point in running comparisons.
ASIDE: While the following is important, I consider it an aside because the primary question here is interpreting your ANOVA. The variance in your experimental group at time 2 is so much higher than in the other cells that you're violating the homogeneity-of-variance assumption of the ANOVA. You could run simulations to see how much that affects alpha or power in your case. I ran a quick one and it shows alpha is generally about 0.06 (when you select 0.05) for each test; sample code below:
nsamp <- 2000  # number of simulated null data sets
n <- 10        # observations per cell
# cell standard deviations, matching the six group-by-time cells
sds <- rep(c(1.36, 1.57, 1.48, 1.14, 3.52, 1.78), n)
x1 <- factor(rep(1:2, times = n, each = 3))  # group factor
x2 <- factor(rep(1:3, 2 * n))                # time factor
Y <- replicate(nsamp, {
  y <- rnorm(6 * n, 0, sds)  # null data with unequal cell variances
  # y <- rnorm(6 * n)        # swap in this line to see what happens with equal variances
  m <- aov(y ~ x1 * x2)
  sm <- summary(m)
  ps <- sm[[1]]$'Pr(>F)'     # p-values for x1, x2, and x1:x2
  ps
  # min(ps, na.rm = TRUE)    # return this instead of ps to study familywise error
})
# empirical Type I error rate for each of the three tests
sum(Y[1, ] < 0.05) / nsamp
sum(Y[2, ] < 0.05) / nsamp
sum(Y[3, ] < 0.05) / nsamp
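If the unequal variances worry you, one option (my suggestion, not something the simulation above establishes) is Welch's ANOVA via oneway.test, which does not assume equal variances. It only handles a single factor, so for a 2 x 3 design you would apply it one factor at a time or turn to a robust alternative:

```r
# Welch's one-way ANOVA does not assume equal group variances.
# Data here are simulated under the null, using the cell SDs from above,
# with the six cells treated as one six-level factor for illustration.
set.seed(2)
sds <- c(1.36, 1.57, 1.48, 1.14, 3.52, 1.78)
g <- factor(rep(1:6, each = 10))
y <- rnorm(60, 0, rep(sds, each = 10))
oneway.test(y ~ g, var.equal = FALSE)  # Welch-corrected F test
```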
If you have a categorical dependent variable with two levels, you do not want ANOVA or GLM (in SAS terminology); you want logistic regression (which, confusingly, is fit with glm in R terminology).
(That confusion comes from the difference between the general linear model and the generalized linear model, which unfortunately share the same acronym.)
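A minimal sketch of what that looks like in R (the variable names and data here are made up):

```r
# Logistic regression: glm() with family = binomial fits the *generalized*
# linear model with a logit link, appropriate for a two-level DV.
set.seed(3)
x   <- rnorm(100)
hit <- rbinom(100, size = 1, prob = plogis(0.5 + x))  # binary outcome

fit <- glm(hit ~ x, family = binomial)
summary(fit)    # coefficients are on the log-odds scale
exp(coef(fit))  # exponentiate to get odds ratios
```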
Best Answer
OK, you have a variable GROUP with 2 values and dependent variables HIT1, HIT2, CR1, CR2, FA1, FA2, MISS1, MISS2. I assume that the DVs are all at interval (scale) measurement level and approximately normally distributed.
You go to the General Linear Model - Repeated Measures menu. Enter the within-subject factor name TIME with 2 levels. Enter 4 measure names: HIT, CR, FA, MISS. Press Define, allocate your DVs properly in the Within-Subjects Variables field, and move GROUP into the Between-Subjects Factors field. If you press Paste, you will get the corresponding syntax.
Now you may run your analysis. Find the within-subject (TIME) effects and the interaction effect (TIME*GROUP) in the Univariate Tests table. Find the effect of GROUP alone in the Tests of Between-Subjects Effects table.
There are also other tables, including multivariate effects, which I won't comment on in these rough instructions. Good luck.