Solved – Why is the variable importance metric suggested by Breiman specific only to random forests

importance, machine learning, random forest

In the Random Forest paper, Breiman describes a nice way of measuring variable importance: take your validation data (the out-of-bag samples in the paper), measure the error rate, permute the values of the variable, and re-measure the error rate.

Question – why is that method specific to Random Forests? I understand that in other classifiers (SVM, LR, etc.) we don't have the concept of OOB, but we certainly can use a regular train-validation split.
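For example, something along these lines (a rough sketch using scikit-learn and a synthetic dataset; the LogisticRegression model and the make_classification data are just placeholders for illustration) seems to apply to any classifier:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    # Fit any classifier on the training split.
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    base_err = 1.0 - clf.score(X_val, y_val)  # baseline validation error rate

    # Permute one column at a time and record the increase in error.
    importances = []
    for j in range(X_val.shape[1]):
        X_perm = X_val.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        importances.append((1.0 - clf.score(X_perm, y_val)) - base_err)

    print(np.round(importances, 3))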

What am I missing here? Why isn't this method a common practice?

Best Answer

Any bagged learner can produce an analogue of the Random Forest importance metric, because bagging gives each base learner its own out-of-bag samples on which to measure how much the error rate increases when a variable is permuted.
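As a rough sketch of what that could look like outside of Random Forests, assuming scikit-learn's BaggingClassifier (its estimators_samples_ attribute exposes each base learner's bootstrap indices, so the complementary out-of-bag rows are available for the permutation step):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.RandomState(0)
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    n, p = X.shape

    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                            bootstrap=True, random_state=0).fit(X, y)

    importances = np.zeros(p)
    for est, sample_idx in zip(bag.estimators_, bag.estimators_samples_):
        oob = np.ones(n, dtype=bool)
        oob[sample_idx] = False            # rows this base learner never saw
        X_oob, y_oob = X[oob], y[oob]
        base_err = 1.0 - est.score(X_oob, y_oob)
        for j in range(p):
            X_perm = X_oob.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            importances[j] += (1.0 - est.score(X_perm, y_oob)) - base_err

    importances /= bag.n_estimators   # average OOB error increase per feature
    print(np.round(importances, 3))

The base estimator here happens to be a decision tree, but it could just as well be an SVM or a logistic regression; the out-of-bag rows come from the bagging, not from the choice of base learner.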

You can't get this kind of feature importance for free in a common cross-validation scheme, where all the features are used all the time and there are no out-of-bag samples to permute.