Model Selection – Best Practices for Non-Nested Model Selection

Tags: aic, likelihood-ratio, nested-models

Both the likelihood ratio test and the AIC are tools for choosing between two models, and both are based on the log-likelihood.

But why can't the likelihood ratio test be used to choose between two non-nested models, while AIC can?

Best Answer

The LR (likelihood ratio) test actually tests the hypothesis that a specified subset of the parameters equals some pre-specified values. In the case of model selection, that generally (but not always) means some of the parameters equal zero. If the models are nested, the parameters in the larger model that are not in the smaller model are the ones being tested, with values specified implicitly by their exclusion from the smaller model. If the models aren't nested, you are no longer testing this hypothesis, because BOTH models have parameters that are not in the other model, so the LR test statistic doesn't have the asymptotic $\chi^2$ distribution that it (usually) does in the nested case.
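
To make the nested case concrete, here is a minimal sketch in Python (using numpy, scipy, and statsmodels); the data and the predictors `x1` and `x2` are simulated and hypothetical, not from the answer above. The smaller model is obtained from the larger one by restricting the coefficient on `x2` to zero, so the LR statistic can be referred to a $\chi^2$ distribution with one degree of freedom.

```python
# Minimal sketch of a nested-model LR test on simulated data.
# The predictors x1, x2 are hypothetical illustrations.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)  # x2 has no true effect

X_small = sm.add_constant(np.column_stack([x1]))      # reduced model
X_large = sm.add_constant(np.column_stack([x1, x2]))  # full model, nests the reduced one

fit_small = sm.OLS(y, X_small).fit()
fit_large = sm.OLS(y, X_large).fit()

# LR statistic: twice the difference in maximized log-likelihoods.
lr_stat = 2 * (fit_large.llf - fit_small.llf)
df = X_large.shape[1] - X_small.shape[1]  # number of restricted parameters
p_value = chi2.sf(lr_stat, df)            # asymptotic chi-square reference
print(f"LR = {lr_stat:.3f}, df = {df}, p = {p_value:.3f}")
```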

AIC, on the other hand, is not used for formal testing. It is used for informal comparisons of models with differing numbers of parameters; the penalty term in the expression for AIC is what allows this comparison. No assumptions are made about the functional form of the asymptotic distribution of the difference between the AICs of two non-nested models, and that difference is not treated as a test statistic.
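
By contrast, here is a sketch of an informal AIC comparison between two non-nested models, again with simulated data and hypothetical predictors. Each model contains a regressor the other lacks, so neither is a restriction of the other; all we do is rank the models by AIC, with no p-value involved.

```python
# Minimal sketch of an informal AIC comparison between two NON-nested
# models. The data and predictors x1, x2 are simulated illustrations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)  # correlated alternative predictor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

# Neither model nests the other: each has a regressor the other lacks.
fit_a = sm.OLS(y, sm.add_constant(x1)).fit()
fit_b = sm.OLS(y, sm.add_constant(x2)).fit()

# Smaller AIC is preferred; the difference is NOT a test statistic,
# so no p-value is computed -- this is only an informal ranking.
print(f"AIC(model with x1) = {fit_a.aic:.1f}")
print(f"AIC(model with x2) = {fit_b.aic:.1f}")
```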

I'll add that there is some disagreement over the use of AIC with non-nested models, as the theory is worked out only for nested models. Hence my emphasis on "not...formal" and "not...test statistic." I use it for non-nested models, but not in a hard-and-fast way; rather, as an important, but not the sole, input into the model-building process.