This method is generally considered "old-fashioned", so while it may be possible, the syntax is difficult and I suspect fewer people know how to manipulate the anova commands to get what you want. The more common method is using glht with a likelihood-based model from nlme or lme4. (I'd certainly welcome being proved wrong by other answers, though.)
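For what it's worth, a minimal sketch of that glht route might look like the following. It assumes the long-format data dd built later in this answer, uses only hour as a fixed effect to keep things simple, and adds a random intercept per subject; all of those are my simplifications, not something from the question.
library(lme4)       # for lmer()
library(multcomp)   # for glht() and mcp()
# A deliberately simple model: hour as the only fixed effect, random intercept per subject
fit <- lmer(value ~ hour + (1 | id), data = dd)
# One row per contrast across the five levels of hour
K <- rbind("1&2 - 3"   = c(0.5, 0.5, -1,    0,    0),
           "1&2 - 4&5" = c(0.5, 0.5,  0, -0.5, -0.5))
summary(glht(fit, linfct = mcp(hour = K)))
By default summary() adjusts these contrasts for multiple comparisons; use summary(..., test = adjusted("none")) if you want the unadjusted per-contrast tests.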
That said, if I needed to do this, I wouldn't bother with the anova commands; I'd just fit the equivalent model using lm, pick out the right error term for this contrast, and compute the F test myself (or, equivalently, a t test, since there's only 1 df). This requires everything to be balanced and to satisfy sphericity, but if you don't have that, you should probably be using a likelihood-based model anyway. You might be able to partially correct for non-sphericity using the Greenhouse-Geisser or Huynh-Feldt corrections, which (I believe) keep the same F statistic but adjust the degrees of freedom.
If you really want to use car, you might find the heplot vignettes helpful; they describe how the matrices in the car package are defined.
Using caracal's method (for the contrasts 1&2 - 3 and 1&2 - 4&5), I get
      psiHat      tStat          F         pVal
1 -3.0208333 -7.2204644 52.1351067 2.202677e-09
2 -0.2083333 -0.6098777  0.3719508 5.445988e-01
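(An aside of mine: with 1 df the F column above is just the square of the tStat column, which is the F/t equivalence mentioned earlier. A quick check, assuming the same 52 error df used later in this answer:)
t.stat <- -7.2204644                      # caracal's t for the 1&2 - 3 contrast
t.stat^2                                  # equals the F statistic (about 52.135)
pf(t.stat^2, 1, 52, lower.tail = FALSE)   # p-value via the F distribution
2 * pt(-abs(t.stat), 52)                  # identical p-value via the t distribution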
This is how I'd get those same p-values:
Reshape the data into long format and run lm to get all the SS terms.
library(car)       # for the OBrienKaiser data
library(reshape2)  # for melt()
d <- OBrienKaiser
d$id <- factor(1:nrow(d))                             # one id per subject
dd <- melt(d, id.vars=c(18, 1:2), measure.vars=3:17)  # wide to long
dd$hour <- factor(as.numeric(gsub("[a-z.]*", "", dd$variable)))
dd$phase <- factor(gsub("[0-9.]*", "", dd$variable),
                   levels=c("pre", "post", "fup"))
m <- lm(value ~ treatment*hour*phase + treatment*hour*phase*id, data=dd)
anova(m)
Make an alternate contrast matrix for the hour term.
foo <- matrix(0, nrow=nrow(dd), ncol=4)   # one column per df of the hour term
foo[dd$hour %in% c(1,2), 1] <- 0.5        # column 1: hours 1&2 vs hour 3
foo[dd$hour %in% c(3),   1] <- -1
foo[dd$hour %in% c(1,2), 2] <- 0.5        # column 2: hours 1&2 vs hours 4&5
foo[dd$hour %in% c(4,5), 2] <- -0.5
foo[dd$hour %in% 1, 3] <- 1               # columns 3 and 4 just fill out the
foo[dd$hour %in% 2, 3] <- 0               # remaining df of the hour term
foo[dd$hour %in% 4, 4] <- 1
foo[dd$hour %in% 5, 4] <- 0
Check that my contrasts give the same SS as the default contrasts (and the same as from the full model).
anova(lm(value ~ hour, data=dd))
anova(lm(value ~ foo, data=dd))
Get the SS and df for just the two contrasts I want.
anova(lm(value ~ foo[,1], data=dd))
anova(lm(value ~ foo[,2], data=dd))
Get the p-values. The numerator of each F is the contrast SS from the tables just above (each on 1 df); the denominator, 72.81/52, is the mean square of the error term for the hour effect, taken from the hour:id line of anova(m).
> F <- 73.003/(72.81/52)
> pf(F, 1, 52, lower=FALSE)
[1] 2.201148e-09
> F <- .5208/(72.81/52)
> pf(F, 1, 52, lower=FALSE)
[1] 0.5445999
Optionally adjust for sphericity by scaling both df by the Greenhouse-Geisser or Huynh-Feldt epsilon (here .48867 and .57413, respectively).
pf(F, 1*.48867, 52*.48867, lower=FALSE)   # Greenhouse-Geisser
pf(F, 1*.57413, 52*.57413, lower=FALSE)   # Huynh-Feldt
Best Answer
The answer to your question depends on your view of overall (or experimentwise) alpha. If, before you collected your data, you set forth planned comparisons (and it sounds like you did), then after you collect the data you run those and only those comparisons; there is no reason to look at any other comparisons (a.k.a. contrasts) or at the overall ANOVA. Why did you look at the overall ANOVA? If you did both planned comparisons and ad hoc comparisons, then you inflated your experimentwise alpha, which some consider a serious error. Of course, you might run ad hoc comparisons to help plan your future research, but not report them.

Also, it seems you have two DVs and ran two ANOVAs. In general, if you run two ANOVAs, each at alpha = .05, you have inflated your experimentwise alpha. Therefore, when running two ANOVAs you might reduce the alpha level for each to maintain your experimentwise alpha (e.g., a Bonferroni correction would test each at .05/2 = .025). Alternatively, you might run a MANOVA (a multivariate analysis of variance), which allows you to consider both DVs simultaneously; a minimal sketch follows below.

Of course, you should also be looking at effect sizes, to help you understand the meaning of your data.
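Here is a minimal sketch of that MANOVA route in base R, with dv1, dv2, group, and dat as placeholder names, since the question's actual variable names aren't given:
# Placeholder data frame 'dat' with two dependent variables and one grouping factor
fit <- manova(cbind(dv1, dv2) ~ group, data = dat)
summary(fit, test = "Pillai")   # multivariate test using both DVs at once
summary.aov(fit)                # follow-up univariate ANOVAs, if wanted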