Solved – AIC versus Likelihood Ratio Test in Model Variable Selection

aic, feature-selection, likelihood-ratio, model-selection

The software that I am currently using to build a model compares a "current run" model to a "reference model" and reports (where applicable) both a chi-squared p-value based on a likelihood ratio test and AIC values for each model. I know that one advantage of AIC over likelihood ratio tests is that AIC can be compared across non-nested models. However, I am not aware of any reason why AIC couldn't or shouldn't be compared on nested models. In my model, when comparing nested models for variable selection, I'm finding several cases where the likelihood ratio test and the AIC comparison suggest opposite conclusions.

Since both are based on likelihood calculations, I'm struggling to interpret these results. However, the documentation of my software says (without explanation):

"If two models are nested (i.e. one is a sub-set of the other) then the more usual chi-squared test is the most appropriate to use. If the models are not nested, the AIC can be used…"

Can anyone elaborate on this and/or explain why AIC is not as helpful as likelihood ratio tests on nested models?

Best Answer

The AIC and the likelihood ratio test (LRT) serve different purposes.

  • AIC tells you whether it pays to use a richer model when your goal is to approximate the underlying data-generating process as closely as possible in terms of Kullback–Leibler divergence.
  • The LRT tells you whether, at a chosen significance level, you can reject the hypothesis that certain restrictions on the richer model hold (e.g. that some of its parameters are redundant).

You would use AIC if your goal is model selection for forecasting. You would use the likelihood ratio test for significance testing. Different goals call for different tools.
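The two criteria can be made concrete side by side. For nested models differing by Δk parameters, preferring the lower AIC is algebraically equivalent to an LRT that rejects when the statistic exceeds 2·Δk; with Δk = 1 that cutoff is 2 (p ≈ 0.157), looser than the usual 3.84 at the 5% level, which is why the two can disagree. Below is a minimal sketch (not from the answer above) using plain Gaussian linear models; the data, effect sizes, and variable names are all illustrative assumptions.

```python
# Illustrative sketch: compare nested OLS models with both AIC and an LRT.
# Assumes Gaussian errors; all names and numbers here are made up for the demo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# x2 has a deliberately weak effect, the regime where AIC and a 5%-level
# LRT are most likely to point in opposite directions.
y = 1.0 + 0.5 * x1 + 0.15 * x2 + rng.normal(size=n)

def gaussian_loglik(y, X):
    """Maximized log-likelihood of an OLS fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)  # MLE of the error variance
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)

X_small = np.column_stack([np.ones(n), x1])      # reduced model
X_big = np.column_stack([np.ones(n), x1, x2])    # full model

ll_small = gaussian_loglik(y, X_small)
ll_big = gaussian_loglik(y, X_big)
k_small, k_big = 3, 4  # parameter counts, including the error variance

aic_small = 2 * k_small - 2 * ll_small
aic_big = 2 * k_big - 2 * ll_big

lr_stat = 2 * (ll_big - ll_small)
p_value = stats.chi2.sf(lr_stat, df=k_big - k_small)

print(f"AIC reduced = {aic_small:.2f}, AIC full = {aic_big:.2f}")
print(f"LRT statistic = {lr_stat:.3f}, p = {p_value:.3f}")
# AIC prefers the full model exactly when lr_stat > 2 * (k_big - k_small),
# i.e. a chi-squared(1) cutoff of 2, versus 3.84 for a 5%-level LRT.
```

The identity `AIC_full - AIC_reduced = 2·Δk - LRT statistic` makes the disagreement zone explicit: whenever the LRT statistic falls between 2·Δk and the chi-squared critical value for your chosen significance level, AIC favors the richer model while the LRT does not reject the reduced one.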