Solved – How to refer to AIC model-averaged parameters and confidence intervals

aic, model-selection

I am writing up results from a regression analysis in which I used AICc model averaging to arrive at my final parameter estimates, and I am wondering how best to refer to these parameters and their 95% confidence intervals. It seems that "significantly different" is taboo in the AIC world, but writing out "the parameter was x.x and its CI does not cross zero" seems much more laborious for both me and the reader than saying "x.x was significantly different from zero."

This issue probably would not come up if I had simply selected the model with the lowest AICc as my best model, which is what many folks do (despite Burnham and Anderson repeatedly advising otherwise). Selecting a single best model lets you say "the parameter is important because it is in the final model."

Also, I'm wondering if there is an AIC model-averaged equivalent to "marginally significant." I have parameters that have the predicted sign and indicate a fairly sizeable effect, but whose CIs just barely cross zero.

Philosophically, I like model averaging, and I also have many good models that often differ only by an extra covariate or an interaction.

EDIT: This inquiry can probably be summarized as: "In an AICc model-averaging framework, how does one interpret parameters whose confidence intervals span zero by only a small amount?"
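
To make the setup concrete, here is a minimal sketch of the kind of computation I am describing. It uses ordinary least squares via statsmodels; the data, candidate model set, and variable names (x1, x2, x3) are invented purely for illustration, and in practice a dedicated package (e.g. MuMIn in R) would handle this bookkeeping.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({"x1": rng.normal(size=n),
                   "x2": rng.normal(size=n),
                   "x3": rng.normal(size=n)})
df["y"] = 0.5 * df["x1"] + 0.2 * df["x2"] + rng.normal(size=n)

# Candidate model set: the predictors included in each model
candidates = [["x1"], ["x1", "x2"], ["x1", "x3"], ["x1", "x2", "x3"]]

def aicc(res):
    # k counts every estimated parameter: coefficients plus the residual variance
    k = len(res.params) + 1
    aic = -2 * res.llf + 2 * k
    return aic + 2 * k * (k + 1) / (res.nobs - k - 1)

fits = [sm.OLS(df["y"], sm.add_constant(df[cols])).fit() for cols in candidates]
scores = np.array([aicc(r) for r in fits])
delta = scores - scores.min()
weights = np.exp(-delta / 2) / np.exp(-delta / 2).sum()  # Akaike weights

# Model-averaged estimate and unconditional SE for x2 ("full" average:
# the coefficient is taken as 0 in models that omit x2)
betas = np.array([r.params.get("x2", 0.0) for r in fits])
ses = np.array([r.bse.get("x2", 0.0) for r in fits])
beta_bar = np.sum(weights * betas)
se_uncond = np.sum(weights * np.sqrt(ses**2 + (betas - beta_bar)**2))
print(f"x2: {beta_bar:.3f}, 95% CI [{beta_bar - 1.96 * se_uncond:.3f}, "
      f"{beta_bar + 1.96 * se_uncond:.3f}]")
```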

Best Answer

If you have read Burnham & Anderson's monograph, you know just why they discourage AIC(c)-based model selection: because they subscribe to the theory of tapering effect sizes. In a nutshell, they posit that everything has an effect - it's just that most effects are pretty small (sort of a "long tail"). Thus, an AIC(c)-selected model may be more parsimonious, but it will be systematically too small (the bias-variance trade-off). Therefore they recommend averaging models.
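
To make "averaging models" concrete (this is standard B&A notation, not anything specific to your analysis): each candidate model $g_i$ receives an Akaike weight based on its AICc distance from the best model, and the averaged coefficient is the weighted sum of the per-model estimates,

$$\Delta_i = \mathrm{AICc}_i - \mathrm{AICc}_{\min}, \qquad w_i = \frac{\exp(-\Delta_i/2)}{\sum_r \exp(-\Delta_r/2)}, \qquad \hat{\bar\beta} = \sum_i w_i\,\hat\beta_i,$$

where, in the "full" average, $\hat\beta_i$ is taken to be zero for models that omit the predictor.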

This is also the reason why statistical significance and p values are not in vogue in the Burnham & Anderson worldview. Tapering effect sizes are another way of saying that the true coefficients are almost always nonzero, just perhaps very small. Thus, the null hypothesis is false a priori, and p values pose a question to which we already know the answer.

Thus, if you follow B&A's philosophy far enough that you do AICc-based model averaging, it seems a bit incongruous to also discuss p values and/or "marginal significance".

Now, one possibility would be simply to report the averaged coefficients and their CIs, without remarking on whether the CIs contain zero. On the other hand, if you are in a field that deifies p values (like psychology), it may make more sense to set aside these implications of B&A in the interest of talking in a way your readers will understand, rather than striving for strict AICc purity.
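
If you do report CIs for averaged coefficients, one practical point: the interval should be built from the unconditional standard error, which adds a between-model component to each model's within-model variance,

$$\widehat{\mathrm{se}}\big(\hat{\bar\beta}\big) = \sum_i w_i \sqrt{\widehat{\mathrm{var}}\big(\hat\beta_i \mid g_i\big) + \big(\hat\beta_i - \hat{\bar\beta}\big)^2},$$

so that the interval already carries the model-selection uncertainty B&A are worried about.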

(Anyway, my impression is that AICc and B&A have more of a following among non-statisticians, especially ecologists. So the nuances we are discussing here may already be far away from your readership's main interests.)
