Solved – How to compare the significance of two models from two different datasets

machine learning, model selection, regression, statistical significance

I have two different regression models, each learned from a different data set.

Is there a statistical method that compares the significance of the two models based on their number of parameters and cross-validation errors?

The problem is that the models come from two different data sets, and I worry that comparing them may not make sense.

Best Answer

As I don't have enough reputation to comment, I'll put this here as an "answer".

First of all, I'm not entirely sure what you're looking for, so I'll do my best to interpret your question; please correct me if I'm wrong.

You have two different statistical models, each fitted to a different data set, and you want to compare them for statistical significance.

Short answer: no, you can't.

Long answer: When you compare two models, you want to know which one has greater explanatory or predictive power with respect to the same data set. You can compute the AIC (asymptotically equivalent to the leave-one-out cross-validation error) or even the R^2 (for linear regression) for each model, but when the models were fit to different data sets the comparison is nearly impossible to interpret. In addition, to my knowledge, most common statistical tests of model fit are designed for nested models only, i.e. where model A is a subset of model B.
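
To make the nested-model point concrete, here is a minimal sketch (assuming Python with statsmodels; the data are simulated purely for illustration and are not from the question): both models are fit to the same response, so their AICs are directly comparable, and because model A uses a subset of model B's terms, an F-test for nested models applies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated data: y depends on x1 only; x2 is noise.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["y"] = 1.0 + 2.0 * df["x1"] + rng.normal(size=n)

# Model A is nested in model B: same data set, a subset of the terms.
model_a = smf.ols("y ~ x1", data=df).fit()
model_b = smf.ols("y ~ x1 + x2", data=df).fit()

# The AICs are comparable only because both models are fit to the same y.
print("AIC A:", model_a.aic)
print("AIC B:", model_b.aic)

# F-test for nested models; valid only because A is a subset of B.
print(anova_lm(model_a, model_b))
```

Fit either model to a different data set and both the AIC comparison and the F-test lose their meaning, which is exactly the problem with your setup.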