In their origin (in SAS), Type III tests are defined via estimable contrasts, and those may or may not correspond to regression coefficients. It seems to me that these tests don't make sense unless you fully understand what those estimable contrasts are. IMHO, it is pretty clear which contrasts are being tested when you use emmeans::joint_tests(), but much less so with car::Anova(), even though the latter often produces the same results. And joint_tests() can be applied to any set of estimated marginal means, thus allowing you to test different estimable contrasts than those that are mysteriously constructed from model comparisons.
If you are interested in deciding which models are appropriate, Type II tests are a lot more useful, as they impose a hierarchy on the reasonable models.
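As a minimal sketch of the two interfaces being compared — using the built-in warpbreaks data as a stand-in, not anything from the question — and noting that car's Type III tests are only meaningful under sum-to-zero (or similar) contrasts:

```r
# Sketch only: warpbreaks is a stand-in dataset, not from the question.
# car's Type III tests require sum-to-zero contrasts to be sensible.
mod <- lm(breaks ~ wool * tension, data = warpbreaks,
          contrasts = list(wool = contr.sum, tension = contr.sum))

car::Anova(mod, type = "III")   # Type III tests via model comparisons
emmeans::joint_tests(mod)       # the same tests, via explicit estimable contrasts
```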
Addendum
Here's an example for a similar model that makes this more concrete.
First, a model with a covariate and a factor interacting and its Type III ANOVA:
> fiber.lm <- lm(strength ~ diameter * machine, data = fiber)
> (jt <- joint_tests(fiber.lm))
model term df1 df2 F.ratio p.value
diameter 1 9 60.996 <.0001
machine 2 9 2.814 0.1124
diameter:machine 2 9 0.488 0.6293
We can see the associated estimable functions via an attribute:
> attr(jt, "est.fcns")
$diameter
(Intercept) diameter machineB machineC diameter:machineB diameter:machineC
[1,] 0 -0.904534 0 0 -0.3015113 -0.3015113
$machine
(Intercept) diameter machineB machineC diameter:machineB diameter:machineC
[1,] 0 0 -0.0414009 0.0000000 -0.9991426 0.0000000
[2,] 0 0 0.0000000 -0.0414009 0.0000000 -0.9991426
$`diameter:machine`
(Intercept) diameter machineB machineC diameter:machineB diameter:machineC
[1,] 0 0 0 0 -1 0
[2,] 0 0 0 0 0 -1
These are estimable functions of the regression coefficients, and that can be a little confusing to look at. One thing that might make it clearer is to re-parameterize it in terms of the cell means for combinations of machines and a symmetric range of diameters.
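The next chunk uses a covariate-reduction helper, meanint, that isn't defined in this excerpt. A plausible definition — consistent with the two grid values (mean diameter ± 1) appearing in the output below — is:

```r
# Assumed definition (not shown in the excerpt): reduce a covariate to two
# values placed symmetrically about its mean, mean - 1 and mean + 1
meanint <- function(x) mean(x) + c(-1, 1)

meanint(c(23, 25))  # c(23, 25): the mean is 24, so 24 - 1 and 24 + 1
```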
> rg <- regrid(ref_grid(fiber.lm, cov.reduce = meanint))
> (jt <- joint_tests(rg))
model term df1 df2 F.ratio p.value
diameter 1 9 60.996 <.0001
machine 2 9 2.814 0.1124
diameter:machine 2 9 0.488 0.6293
> # same ANOVA result!
> attr(jt, "est.fcns")
$diameter
23.1333333333333.A 25.1333333333333.A 23.1333333333333.B 25.1333333333333.B
[1,] -0.4082483 0.4082483 -0.4082483 0.4082483
23.1333333333333.C 25.1333333333333.C
[1,] -0.4082483 0.4082483
$machine
23.1333333333333.A 25.1333333333333.A 23.1333333333333.B 25.1333333333333.B
[1,] -0.5000000 -0.5000000 0.5000000 0.5000000
[2,] -0.2886751 -0.2886751 -0.2886751 -0.2886751
23.1333333333333.C 25.1333333333333.C
[1,] 0.0000000 0.0000000
[2,] 0.5773503 0.5773503
$`diameter:machine`
23.1333333333333.A 25.1333333333333.A 23.1333333333333.B 25.1333333333333.B
[1,] -0.5000000 0.5000000 0.5000000 -0.5000000
[2,] 0.2886751 -0.2886751 0.2886751 -0.2886751
23.1333333333333.C 25.1333333333333.C
[1,] 0.0000000 0.0000000
[2,] -0.5773503 0.5773503
Now it is easier to see that weights are balanced across machines and/or diameters, albeit some of the weights are scaled funny.
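A side note on the "scaled funny" remark: a joint Wald/F test is unaffected by how the contrast rows are scaled — or, more generally, by any nonsingular recombination of the rows. A quick base-R check on made-up data (not the fiber example):

```r
# Base-R check: the joint Wald F statistic is invariant to replacing the
# contrast matrix L by M %*% L for any nonsingular M.
set.seed(2)
X <- cbind(1, matrix(rnorm(60), 20, 3))   # made-up design matrix
y <- rnorm(20)                            # made-up response
fit <- lm.fit(X, y)
b <- fit$coefficients
V <- sum(fit$residuals^2) / fit$df.residual * solve(crossprod(X))

wald <- function(L) {
  est <- L %*% b
  drop(t(est) %*% solve(L %*% V %*% t(L)) %*% est) / nrow(L)
}

L <- rbind(c(0, 1, 0, 0), c(0, 0, 1, 0))
M <- rbind(c(2, 1), c(0, -3))             # arbitrary nonsingular rescaling
all.equal(wald(L), wald(M %*% L))         # TRUE
```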
Best Answer
This would have been more appropriate here.
I spent a great deal of time writing the documentation for the function, so I'd be interested to know what is not covered in the documentation.
type='joint' will provide the combined test of the joint null hypothesis that all of the listed contrasts are zero; the alternative hypothesis is that at least one of them is non-zero. You can use this approach to get ANOVA-like tests, but the tests do not have to involve all the levels of the variables (unlike ANOVA, which tests for all possible group differences).

To shorten the notation, suppose you have two predictors: x1 (having values A and B) and x2 (having values a, b, c). If you run
you'll get these estimated differences in log odds, and their individual confidence intervals (there is an option to instead get simultaneous confidence intervals): A-a, A-b, A-c. Then you'll get a 3 d.f. Wald $\chi^2$ test to bring evidence against the supposition that all three of these differences in log odds are zero.
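The contrast() call itself is omitted above; under the setup just described (a binary outcome modeled with rms, predictors x1 and x2), it might look like the following sketch — the data and model here are invented purely for illustration:

```r
library(rms)

# Invented data, purely for illustration of the call's shape
set.seed(1)
dat <- data.frame(
  x1 = factor(sample(c("A", "B"), 200, replace = TRUE)),
  x2 = factor(sample(c("a", "b", "c"), 200, replace = TRUE)),
  y  = rbinom(200, 1, 0.5)
)
dd <- datadist(dat); options(datadist = "dd")
f <- lrm(y ~ x1 * x2, data = dat)

# Individual differences in log odds (x1 = A vs B at each level of x2),
# each with its own confidence interval
contrast(f, list(x1 = "A", x2 = c("a", "b", "c")),
            list(x1 = "B", x2 = c("a", "b", "c")))

# The 3 d.f. joint Wald chi-square test that all three differences are zero
contrast(f, list(x1 = "A", x2 = c("a", "b", "c")),
            list(x1 = "B", x2 = c("a", "b", "c")), type = "joint")
```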
But when you provide only one list to the function, you are not doing contrasts at all; you are just getting predicted values (non-differences). If you paste in the output you got, we can take a further look.