That comparison of one level with the mean of all later levels is called Helmert coding, or Helmert contrasts (aside from scale). The one you give is the first contrast; the other would be a scaled version of $(0, 1, -1)^\top$.
What R calls Helmert coding, this scheme calls 'reverse Helmert'; the two are equivalent up to a change of the level order.
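As a quick numerical check of the "one level vs. the mean of the later levels" reading, here is a pure-Python sketch. The group means are made up; the key fact used is that for centered, mutually orthogonal contrast columns, each coefficient is a simple projection $c_j^\top \mu / c_j^\top c_j$.

```python
from fractions import Fraction as F

# Hypothetical group means (illustration only, not from any real data)
mu = [F(10), F(4), F(6)]

# The two contrasts discussed above:
# c1 compares level 1 with the mean of the later levels;
# c2 is the (0, 1, -1) contrast between the last two levels.
c1 = [F(1), F(-1, 2), F(-1, 2)]
c2 = [F(0), F(1), F(-1)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The columns are centered and orthogonal, so each coefficient is a projection.
assert sum(c1) == 0 and sum(c2) == 0 and dot(c1, c2) == 0

beta1 = dot(c1, mu) / dot(c1, c1)
beta2 = dot(c2, mu) / dot(c2, c2)

# beta1 is, up to the scale factor 2/3, "level 1 vs. mean of the later levels"
assert beta1 == F(2, 3) * (mu[0] - (mu[1] + mu[2]) / 2)
# beta2 is half the difference between the last two group means
assert beta2 == (mu[1] - mu[2]) / 2
print(beta1, beta2)
```

The scale factors are exactly the "aside from scale" caveat: the tests and p-values are unchanged, only the units of the coefficients differ.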
I thought I would explain what I ended up doing here in case it's helpful to anyone else.
Step 1: Fit the LME with effects (sum) coding
library(MASS)
library(lme4)
library(psycholing)
library(lmerTest)
contrasts(data$Group) = contr.sum(2)
contrasts(data$A) = contr.sum(2)
contrasts(data$B) = contr.sum(3)
lme = lmer(respVar ~ 1 + Group*A*B + (1|Subject) + (1|Item), control=lmerControl(optCtrl=list(maxfun=100000)), data=data)
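Under sum coding the intercept estimates the grand mean of the level means and each slope a deviation from it, which is what makes it the natural choice before an omnibus test. A minimal pure-Python sketch of that mapping (the cell means are invented):

```python
from fractions import Fraction as F

# contr.sum(3) codes the three levels of B as the rows of:
C = [[F(1), F(0)],
     [F(0), F(1)],
     [F(-1), F(-1)]]

# Hypothetical level means (illustration only)
mu = [F(7), F(3), F(5)]

# With sum coding, the intercept is the (unweighted) grand mean of the
# level means, and each slope is that level's deviation from it.
b0 = sum(mu) / 3
b1 = mu[0] - b0
b2 = mu[1] - b0

# Check: the model reproduces every level mean, including the dropped
# level, whose deviation is -(b1 + b2).
for i in range(3):
    assert b0 + C[i][0] * b1 + C[i][1] * b2 == mu[i]
print(b0, b1, b2)
```

The same logic extends to the interaction terms: each sum-coded coefficient is a deviation effect averaged over the other factors.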
I performed model selection using sum coding and then tested the overall significance of each coefficient using anova from the lmerTest package:
anova(lme)  # dispatches to lmerTest's method since lmerTest is loaded
This gave me a significant Group x A x B three-way interaction.
Step 2: Switch to dummy (treatment) coding and fit three models, each with a different level of B as the reference (i.e., as the intercept).
contrasts(data$Group) = contr.treatment(2)
contrasts(data$A) = contr.treatment(2)
contrasts(data$B) = contr.treatment(3)
# N.B. these are the default contrasts in R:
contrasts(data$B)
#    B2 B3
# B1  0  0
# B2  1  0
# B3  0  1
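With treatment coding the intercept becomes the reference-level mean and each slope a difference from that reference, which is why refitting with each level of B as reference exposes a different set of simple effects. A small pure-Python sketch (invented means):

```python
from fractions import Fraction as F

# contr.treatment(3) with B1 as reference codes the levels as:
C = [[F(0), F(0)],   # B1 (reference)
     [F(1), F(0)],   # B2
     [F(0), F(1)]]   # B3

# Hypothetical level means (illustration only)
mu = [F(4), F(6), F(9)]

# Intercept = reference mean; slopes = differences from the reference.
b0, b1, b2 = mu[0], mu[1] - mu[0], mu[2] - mu[0]
for i in range(3):
    assert b0 + C[i][0] * b1 + C[i][1] * b2 == mu[i]
print(b0, b1, b2)  # 4 2 5
```

Releveling just swaps which row is all zeros, so each refit reads off comparisons against a different baseline without changing the fitted cell means.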
lmeB1 = lmer(respVar ~ 1 + Group*A*B + (1|Subject) + (1|Item), control=lmerControl(optCtrl=list(maxfun=100000)), data=data)
b1sum = summary(lmeB1)  # lmerTest's summary method, with p-values
data$B = relevel(data$B, ref = "B2")
lmeB2 = lmer(respVar ~ 1 + Group*A*B + (1|Subject) + (1|Item), control=lmerControl(optCtrl=list(maxfun=100000)), data=data)
b2sum = summary(lmeB2)
data$B = relevel(data$B, ref = "B3")
lmeB3 = lmer(respVar ~ 1 + Group*A*B + (1|Subject) + (1|Item), control=lmerControl(optCtrl=list(maxfun=100000)), data=data)
b3sum = summary(lmeB3)
Step 3: Extract the contrasts of interest and apply a Bonferroni-Holm correction for multiple comparisons.
# Test the contrasts:
# 1) Group1 A1 B1 vs. Group1 A1 B2/B3
# 2) Group1 A1 B1/B2/B3 vs. Group2 A1 B1/B2/B3
# 3) Group1 A1 B1/B2/B3 vs. Group1 A2 B1/B2/B3
pvals = cbind("B1" = p.adjust(b1sum$coefficients[c(12, 11, 8, 3), 5], "holm"),
              "B2" = c(NA, NA, p.adjust(b2sum$coefficients[c(8, 3), 5], "holm")),
              "B3" = c(NA, NA, p.adjust(b3sum$coefficients[c(8, 3), 5], "holm")))
# The numbers index the rows of the coefficients of interest in each model's
# coefficient table; column 5 contains the p-values. NA marks contrasts that
# only exist in the B1-referenced model.
# Reference level = Group1
#                       B1            B2           B3
# 1a) B2:A1      0.001707473
# 1b) B3:A1      0.027679733
# 2)  Group2:A2  0.016903682  0.0328017681    0.9451504
# 3)  A2         0.127490731  0.0008424514    0.1002219
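For reference, the step-down adjustment that p.adjust(p, "holm") applies is short enough to sketch in pure Python (same algorithm, written out so the correction is transparent):

```python
def holm(pvals):
    """Holm step-down adjustment, as in R's p.adjust(p, "holm")."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k), then enforce
        # monotonicity with a running maximum, capping at 1.
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

print(holm([0.01, 0.04, 0.03, 0.005]))
```

Note that the correction is applied per family here (per reference level), matching the three separate p.adjust calls above.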
Note that I did this in R because I also included a fixed effect for participant gender, which I coded as c(0.5, -0.5) to centre the estimates on the mean of the two genders (effectively "controlling for" gender). This is easier to do in R with the contrasts function; in MATLAB, it seems you have to specify the entire design matrix manually if you want anything other than effects or dummy coding.
If you don't need custom contrasts, this whole process can be done much more easily in MATLAB by fitting the model with the default (dummy) variable coding:
lme = fitlme(data, 'respVar ~ 1 + Group*A*B + (1|Subject) + (1|Item)', 'FitMethod', 'REML', 'CheckHessian', true);
Then use coefTest to test specific contrast matrices on your coefficients. The following gives me an F test for the contrast between my second and third coefficients---B2 and B3 in this case---with a Satterthwaite approximation for the degrees of freedom. (See this reference for a discussion of significance testing for LMEs: https://doi.org/10.3758/s13428-016-0809-y)
[pval,F,DF1,DF2]=coefTest(lme, [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'DFMethod', 'Satterthwaite')
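The contrast matrix passed to coefTest is just one row per linear combination of fixed effects, with one column per coefficient. A tiny helper like the following (hypothetical, pure Python; coefTest itself is MATLAB) makes such rows less error-prone to build than typing out the zeros:

```python
def contrast_row(n_coef, weights):
    """Build one row of a coefTest-style contrast matrix.

    `weights` maps 0-based coefficient index -> weight; all other
    entries are zero.
    """
    row = [0] * n_coef
    for idx, w in weights.items():
        row[idx] = w
    return row

# The two rows from the coefTest call above: they select the 3rd and
# 2nd of 13 fixed-effect coefficients (0-based indices 2 and 1).
H = [contrast_row(13, {2: 1}), contrast_row(13, {1: 1})]
print(H)
```

A row like {1: 1, 2: -1} would instead test the difference between those two coefficients directly as a single contrast.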
Best Answer
Yes. You can easily verify this by carrying out the following steps:
First, express the means of $A$, $B$, and $C$, in terms of the model with the specified contrast: \begin{eqnarray*} 1\hat{\beta}_{0}-2\hat{\beta}_{1}+0\hat{\beta}_{2} & = & \hat{\mu}_{A}=E(Y_{A})\\ 1\hat{\beta}_{0}+1\hat{\beta}_{1}-1\hat{\beta}_{2} & = & \hat{\mu}_{B}=E(Y_{B})\\ 1\hat{\beta}_{0}+1\hat{\beta}_{1}+1\hat{\beta}_{2} & = & \hat{\mu}_{C}=E(Y_{C}) \end{eqnarray*}
Here, each $\hat{\mu}_i$ represents the group mean of group $i$, $i=A, B, C$. Next, collect the coefficients of each equation into a matrix augmented with the means on the right, and put the matrix in reduced row-echelon form using Gauss-Jordan elimination:
\begin{eqnarray*} \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\ 1 & 1 & -1 & | & \hat{\mu}_{B}\\ 1 & 1 & 1 & | & \hat{\mu}_{C} \end{bmatrix} & \sim & \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\ 0 & 3 & -1 & | & \hat{\mu}_{B}-\hat{\mu}_{A}\\ 0 & 3 & 1 & | & \hat{\mu}_{C}-\hat{\mu}_{A} \end{bmatrix}\\ & \sim & \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\ 0 & 3 & -1 & | & \hat{\mu}_{B}-\hat{\mu}_{A}\\ 0 & 0 & 2 & | & \left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right) \end{bmatrix}\\ & \sim & \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\ 0 & 3 & -1 & | & \hat{\mu}_{B}-\hat{\mu}_{A}\\ 0 & 0 & 1 & | & \frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right] \end{bmatrix}\\ & \sim & \begin{bmatrix}1 & -2 & 0 & | & \hat{\mu}_{A}\\ 0 & 1 & 0 & | & \frac{1}{3}\left\{ \left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)+\frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]\right\} \\ 0 & 0 & 1 & | & \frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right] \end{bmatrix}\\ & \sim & \begin{bmatrix}1 & 0 & 0 & | & \hat{\mu}_{A}+\frac{2}{3}\left\{ \left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)+\frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]\right\} \\ 0 & 1 & 0 & | & \frac{1}{3}\left\{ \left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)+\frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]\right\} \\ 0 & 0 & 1 & | & \frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right] \end{bmatrix} \end{eqnarray*}
So now we know that the first pivot position corresponds to:
\begin{eqnarray*} \hat{\beta}_{0} & = & \hat{\mu}_{A}+\frac{2}{3}\left\{ \left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)+\frac{1}{2}\left[\left(\hat{\mu}_{C}-\hat{\mu}_{A}\right)-\left(\hat{\mu}_{B}-\hat{\mu}_{A}\right)\right]\right\} \\ & = & \hat{\mu}_{A}-\frac{2}{3}\hat{\mu}_{A}-\frac{1}{3}\hat{\mu}_{A}+\frac{1}{3}\hat{\mu}_{A}+\frac{2}{3}\hat{\mu}_{B}-\frac{1}{3}\hat{\mu}_{B}+\frac{1}{3}\hat{\mu}_{C}\\ & = & \frac{1}{3}\hat{\mu}_{A}+\frac{1}{3}\hat{\mu}_{B}+\frac{1}{3}\hat{\mu}_{C}\\ & = & \frac{\hat{\mu}_{A}+\hat{\mu}_{B}+\hat{\mu}_{C}}{3} \end{eqnarray*}
The final expression indicates that $\hat{\beta}_{0}$, the intercept, represents the simple mean of the group means.
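The algebra can also be checked numerically: run Gauss-Jordan elimination on the same augmented system for arbitrary group means and confirm that the intercept comes out as their simple mean. A pure-Python sketch with exact fractions (the means are made up):

```python
from fractions import Fraction as F

# Design rows for the contrast (1, -2, 0), (1, 1, -1), (1, 1, 1),
# augmented with arbitrary group means mu_A, mu_B, mu_C.
mu = [F(9), F(12), F(33)]
M = [[F(1), F(-2), F(0), mu[0]],
     [F(1), F(1), F(-1), mu[1]],
     [F(1), F(1), F(1), mu[2]]]

# Gauss-Jordan elimination to reduced row-echelon form.
n = 3
for col in range(n):
    # Normalize the pivot row, then clear the column in the other rows.
    piv = M[col][col]
    M[col] = [x / piv for x in M[col]]
    for r in range(n):
        if r != col:
            factor = M[r][col]
            M[r] = [x - factor * p for x, p in zip(M[r], M[col])]

beta0 = M[0][3]
assert beta0 == sum(mu) / 3  # the intercept is the mean of the group means
print(beta0)
```

With exact fractions the identity holds to equality, not just to floating-point tolerance, for any choice of the three means.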