Solved – Does there exist a model fit statistic (like AIC or BIC) that can be used for absolute instead of just relative comparisons

Tags: aic, bic, model-selection

I'm not that familiar with this literature, so please forgive me if this is an obvious question.

Since AIC and BIC depend on the maximized likelihood, it seems that they can only be used to make relative comparisons between a set of models attempting to fit a given data-set. According to my understanding, it wouldn't make sense to calculate the AIC for Model A on data-set 1, calculate the AIC for Model B on data-set 2, and then compare the two AIC values and judge that (for example) Model A fits data-set 1 better than Model B fits data-set 2. Or perhaps I am mistaken and that is a reasonable thing to do. Please let me know.
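(For reference, the standard definitions are $\mathrm{AIC} = 2k - 2\ln\hat{L}$ and $\mathrm{BIC} = k\ln n - 2\ln\hat{L}$, where $\hat{L}$ is the maximized likelihood, $k$ the number of estimated parameters, and $n$ the sample size. Since $\ln\hat{L}$ depends on the scale and size of the particular data-set, the raw values have no absolute interpretation; only differences between models fit to the same data seem meaningful.)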

My question is this: does there exist a model fit statistic that can be used for absolute rather than just relative comparisons? For linear models, something like $R^2$ would work; it has a defined range, and there are discipline-specific ideas about what counts as a "good" value. I'm looking for something more general, and thought I could start by pinging the experts here. I'm sure someone has thought about this kind of thing before, but I don't quite know the right terms for a productive search on Google Scholar.

Any help would be appreciated.

Best Answer

In line with what Macro suggested, I think the term you are looking for is a performance measure. Although it is not a safe way to assess predictive power, it is a very useful way to compare the fitting quality of various models.

An example measure would be the Mean Absolute Percentage Error (MAPE), but many others can easily be found.

Suppose you use set A with model A to describe the number of holes in a road, and set B with model B to describe the number of people in a country. You cannot, of course, say that one model is better than the other, but you can at least see which one provides a more accurate description of its own data.
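As a minimal sketch of that idea (the data and function names below are purely illustrative, not from any particular library), MAPE can be computed for each model on its own data-set and the two percentages read on a common scale:

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error: the average of |actual - predicted| / |actual|,
    expressed as a percentage. Only well defined when no actual value is zero."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Hypothetical set A / model A: number of holes in a road
holes_actual    = np.array([12, 7, 30, 18])
holes_predicted = np.array([10, 8, 27, 20])

# Hypothetical set B / model B: number of people in a country
pop_actual    = np.array([5.1e6, 5.3e6, 5.6e6])
pop_predicted = np.array([5.0e6, 5.4e6, 5.9e6])

print(f"Model A MAPE: {mape(holes_actual, holes_predicted):.1f}%")
print(f"Model B MAPE: {mape(pop_actual, pop_predicted):.1f}%")
# Because MAPE is scale-free (a percentage), the two numbers can be compared
# directly even though the models and data-sets are unrelated.
```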
