The percentage of variance explained by each predictor depends on the order in which the predictors are entered.
If you specify a particular order, you can compute this trivially in R (e.g. via the update and anova functions; see below), but a different order of entry would potentially yield very different answers.
[One possibility might be to average across all possible orders of entry, but that quickly becomes unwieldy and might not be answering a particularly useful question.]
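For what it's worth, here is a brute-force sketch of that averaging idea (this is essentially what the relaimpo package's "lmg" metric does properly; the two-predictor data below are simulated purely for illustration):

```r
# Average each predictor's incremental SS over every order of entry
# (simulated data; with k predictors there are k! orders, hence "unwieldy")
set.seed(1)
d <- data.frame(iv1 = rnorm(20), iv2 = rnorm(20))
d$dv <- d$iv1 + 0.5 * d$iv2 + rnorm(20)

ivs   <- c("iv1", "iv2")
perms <- list(c("iv1", "iv2"), c("iv2", "iv1"))   # all orders of entry

incr <- sapply(perms, function(ord) {
  ss <- anova(lm(reformulate(ord, "dv"), data = d))[ord, "Sum Sq"]
  ss[match(ivs, ord)]                # put back into iv1, iv2 order
})
rowMeans(incr)    # averaged incremental SS, one value per predictor
```

Note that the total explained SS is the same for every order; only its split between predictors changes.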
--
As Stat points out, within a single model, if you want one variable at a time, you can just use anova to produce the incremental (sequential) sums of squares table. This follows on from your code:
anova(fit)
Analysis of Variance Table

Response: dv
          Df   Sum Sq  Mean Sq F value Pr(>F)
iv1        1 0.033989 0.033989  0.7762 0.4281
iv2        1 0.022435 0.022435  0.5123 0.5137
iv3        1 0.003048 0.003048  0.0696 0.8050
iv4        1 0.115143 0.115143  2.6294 0.1802
iv5        1 0.000220 0.000220  0.0050 0.9469
Residuals  4 0.175166 0.043791
--
So there we have the incremental variance explained; how do we get the proportion?
Pretty trivially, scale them by 1 divided by their sum. (Replace the 1 with 100 for percentage variance explained.)
Here I've displayed it as an added column to the anova table:
af <- anova(fit)
afss <- af$"Sum Sq"
print(cbind(af,PctExp=afss/sum(afss)*100))
          Df       Sum Sq      Mean Sq    F value    Pr(>F)      PctExp
iv1        1 0.0339887640 0.0339887640 0.77615140 0.4280748  9.71107544
iv2        1 0.0224346357 0.0224346357 0.51230677 0.5137026  6.40989591
iv3        1 0.0030477233 0.0030477233 0.06959637 0.8049589  0.87077807
iv4        1 0.1151432643 0.1151432643 2.62935731 0.1802223 32.89807550
iv5        1 0.0002199726 0.0002199726 0.00502319 0.9468997  0.06284931
Residuals  4 0.1751656402 0.0437914100         NA        NA 50.04732577
--
If you decide you want several particular orders of entry, you can do something even more general like this (which also allows you to enter or remove groups of variables at a time if you wish):
m5 = fit
m4 = update(m5, ~ . - iv5)
m3 = update(m4, ~ . - iv4)
m2 = update(m3, ~ . - iv3)
m1 = update(m2, ~ . - iv2)
m0 = update(m1, ~ . - iv1)
anova(m0,m1,m2,m3,m4,m5)
Analysis of Variance Table
Model 1: dv ~ 1
Model 2: dv ~ iv1
Model 3: dv ~ iv1 + iv2
Model 4: dv ~ iv1 + iv2 + iv3
Model 5: dv ~ iv1 + iv2 + iv3 + iv4
Model 6: dv ~ iv1 + iv2 + iv3 + iv4 + iv5
  Res.Df     RSS Df Sum of Sq      F Pr(>F)
1      9 0.35000
2      8 0.31601  1  0.033989 0.7762 0.4281
3      7 0.29358  1  0.022435 0.5123 0.5137
4      6 0.29053  1  0.003048 0.0696 0.8050
5      5 0.17539  1  0.115143 2.6294 0.1802
6      4 0.17517  1  0.000220 0.0050 0.9469
(Such an approach might also be automated, e.g. via loops and the use of get. You can add and remove variables in multiple orders if needed.)
... and then scale to percentages as before.
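That automation might look something like the following sketch, which builds the same nested sequence in a loop and then scales the incremental SS to percentages (simulated data stand in for dv and iv1..iv5, since the original data aren't shown):

```r
# Build the nested sequence dv ~ 1, dv ~ iv1, dv ~ iv1 + iv2, ... in a loop
# (made-up data; replace `dat` with your own data frame)
set.seed(42)
dat <- as.data.frame(matrix(rnorm(10 * 6), nrow = 10,
                            dimnames = list(NULL, c("dv", paste0("iv", 1:5)))))

ivs    <- paste0("iv", 1:5)
models <- list(lm(dv ~ 1, data = dat))
for (i in seq_along(ivs)) {
  models[[i + 1]] <- update(models[[i]], paste(". ~ . +", ivs[i]))
}
cmp <- do.call(anova, models)          # same table as anova(m0, m1, ..., m5)

ss  <- cmp$"Sum of Sq"[-1]             # incremental SS per added variable
pct <- 100 * ss / (sum(ss) + tail(cmp$RSS, 1))   # scale by total SS
setNames(round(pct, 2), ivs)
```

The incremental SS plus the final model's residual SS add up to the total SS of dv, which is why the denominator above is their sum.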
(NB. The fact that I explain how to do these things should not necessarily be taken as advocacy of everything I explain.)
If you've used a stepwise method (& see Algorithms for automatic model selection for the drawbacks), you can show the current model at each step (more usual for exposition of the method than because of any perceived intrinsic interest of each intermediate model, I'd have thought). Otherwise there's no point: as @charles says, it's common to compare models suggested by competing theories, or that differ in the expense of using them for prediction, or (in general) for reasons that depend on what the models say about the things they model.
It may be tempting to view the change in the coefficient of determination as you add each predictor as a measure of its importance for or contribution to the model's predictive power; but if the predictors are correlated, as they typically will be for observational data, this can be quite misleading—you get different answers by changing the order in which you add predictors. Jeromy Anglim's blog discusses the issues, & suggests better measures.
Best Answer
I suspect you are asking about the different kinds of sums-of-squares and nested hypothesis tests. The two primary kinds of SS that people worry about are type I SS and type III SS. I have written about this topic several times on CV; you may want to read some of these answers (primarily here and here, but also here and here) to get more information about this issue.

In a nutshell, this is about how the sums of squares are partitioned and what SS gets used for the numerator of an F (actually F-change) test. (I discuss the F-change test in a different context here.) Specifically, is the SS determined by dropping each predictor from the final model, with all the other terms still included, or are they dropped in order, such that (for example) the first predictor is still out when the second one is dropped?

In my previous answers about SS, I emphasized whether all of the information is being used, but, perhaps even more importantly, we should also notice that these two are answers to different substantive questions. If you want to know whether A is related to the DV even after having taken account of the relationships between B and C and the DV, then you should either use type III SS to test it, or (which amounts to the same thing) type I SS where A is dropped first and B & C are still in the model. Moreover, note that the type of SS used is irrelevant if the predictors are perfectly orthogonal. Lastly, it's important to recognize that the F test associated with A for a model in which A is the only predictor is substantively different from the F-change test in which A is dropped last, because the denominator of F differs.
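A small simulated sketch of the distinction, using only base R (drop1 gives the marginal, drop-each-term-from-the-full-model tests, which for a model with no interactions correspond to type III SS; the data here are made up):

```r
# Correlated predictors: sequential (type I) SS depend on order of entry,
# while marginal (type III-style) SS do not
set.seed(1)
n <- 50
A <- rnorm(n)
B <- A + rnorm(n)            # B is correlated with A
y <- A + B + rnorm(n)

fitAB <- lm(y ~ A + B)
fitBA <- lm(y ~ B + A)

anova(fitAB)["A", "Sum Sq"]  # type I SS for A, entered first
anova(fitBA)["A", "Sum Sq"]  # type I SS for A, entered last -- different
drop1(fitAB, test = "F")     # marginal tests; order of entry is irrelevant
```

The "A entered last" type I SS matches the drop1 SS for A, since both drop A from the full model with B still included.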