Impossible to know because you don't know the base rate at which H0 is actually true or false. That being said...
You are already 'protected' against Type I error when you restrict yourself to doing post hoc analyses only on elements of models that were statistically significant overall. In addition, post hoc correction procedures tend toward draconian criteria for evidence. Therefore, I'd tend to think that you'd be drifting toward a notable increase in Type II error risk. So I'd suggest that (if your peers will allow it) you look at the magnitude of the effects from your overall analysis rather than fiddling around with post hoc tests.
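To see why the standard correction procedures quickly become draconian, consider how Bonferroni-style adjustment inflates p-values as the number of pairwise comparisons grows. A minimal sketch using base R's p.adjust; the raw p-values here are made up purely for illustration:

```r
# Hypothetical raw p-values from 10 pairwise post hoc comparisons
p.raw <- c(0.004, 0.011, 0.02, 0.03, 0.04, 0.06, 0.10, 0.21, 0.35, 0.48)

# Bonferroni multiplies each p-value by the number of tests (capped at 1);
# Holm is uniformly less conservative but still stringent
p.adjust(p.raw, method = "bonferroni")
p.adjust(p.raw, method = "holm")

# With 10 tests, only the comparison with raw p = 0.004 survives either
# correction at the conventional 0.05 level (0.004 * 10 = 0.04)
```

With only a handful of comparisons the penalty is mild, but it scales with every extra test you run, which is exactly the Type II risk described above.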
You might try the lsmeans package, as it makes some of this stuff easier and clearer. To get the results on the back-transformed scale, include the option type = "response":
require(lsmeans)
lsm <- lsmeans(binomial.glmm, ~ factor1 * factor2)
summary(lsm, type = "response")
To see the results graphically, do
plot(lsm, by = "factor2", intervals = TRUE, type = "response")
or, for an interaction-plot style,
lsmip(lsm, factor1 ~ factor2, type = "response")
In a mixed model like this, it is often the case that the SEs of these least-squares means (AKA predictions) will be much larger than those of some or all of the pairwise differences, because the between-subjects variations cancel out in those comparisons. So the displayed CIs can be very misleading for comparing the predictions (and you shouldn't use CIs to do comparisons in any case).
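The reason is a standard variance identity. For two least-squares means estimated from the same subjects,

$$\operatorname{Var}(\hat\mu_1 - \hat\mu_2) = \operatorname{Var}(\hat\mu_1) + \operatorname{Var}(\hat\mu_2) - 2\operatorname{Cov}(\hat\mu_1, \hat\mu_2)$$

and in a within-subjects design the covariance term is large and positive, because both predictions contain the same random subject effects. Subtracting removes that shared between-subjects component, so the SE of the difference can be far smaller than the SE of either mean.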
To get the Tukey-adjusted comparisons, do
summary(pairs(lsm), type = "response")
(This actually computes the differences on the logit scale and then back-transforms them, so the results are odds ratios. If you want differences of proportions instead, use pairs(regrid(lsm)).)
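For a self-contained illustration of the whole workflow, here is a sketch on simulated data. The model and factor names mirror those above, but the data, effect sizes, and the use of lme4's glmer are assumptions, not the OP's actual setup:

```r
library(lme4)
library(lsmeans)

set.seed(1)
# Simulated two-factor within-subject binary data (purely illustrative)
d <- expand.grid(subject = factor(1:30),
                 factor1 = factor(c("a", "b")),
                 factor2 = factor(c("x", "y")))
d$response <- rbinom(nrow(d), 1,
                     plogis(-0.5 + (d$factor1 == "b") + 0.5 * (d$factor2 == "y")))

binomial.glmm <- glmer(response ~ factor1 * factor2 + (1 | subject),
                       data = d, family = binomial)

lsm <- lsmeans(binomial.glmm, ~ factor1 * factor2)

summary(lsm, type = "response")          # estimated probabilities + CIs
summary(pairs(lsm), type = "response")   # Tukey-adjusted odds ratios
pairs(regrid(lsm))                       # differences of estimated proportions
```

With 2 × 2 = 4 cells, pairs() produces all 6 pairwise comparisons; regrid() first re-expresses the reference grid on the probability scale, which is why its contrasts come out as differences of proportions rather than odds ratios.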
Best Answer
If you are interested in visualizing an interaction effect specifically, you can remove the main effects (i.e., the marginal means of each factor, $\bar x_i$ and $\bar x_j$) from each treatment mean $\bar x_{ij}$ (combination of factor levels, indexed by $i$ and $j$) based on the relation
$$\gamma_{ij} = \bar x_{ij} - \bar x_i - \bar x_j + \bar x$$
This will yield $i$ (or $j$) curves in which every value is expressed as a deviation from a baseline, which is simply the grand mean ($\bar x$). This idea is developed in Howell, Statistical Methods for Psychology. Below is an illustration with one of Howell's datasets (a study of the number of words recalled as a function of subjects' age and recall condition, N = 100).