Solved – use output of classifier A as feature for classifier B

classification, ensemble-learning, feature-selection, feature-engineering

This may be a confused question, but I'm curious whether this is a valid way to combine classifiers.

I have a classification data set, i.e. a column of labels and N columns of features, and I use a classifier (A) to generate a column of predicted labels. Can I turn around and use those labels as an additional feature for a second classifier (B)? That is, can I classify using classifier B with N+1 features, where the +1 feature is the output of classifier A? (Question 1)

A similar question was asked here about neural networks, but I think the answer was geared to unsupervised learning. I'm wondering if this is a valid way to combine classifiers for supervised learning, and why / why not.

If it is valid, is there any advantage / disadvantage to this "sequential" combination compared to simply combining the predictions of classifiers A and B "after the fact"? That is, predict labels with A, predict labels with B, and average or otherwise combine the predictions. (Question 2)

Best Answer

Yes, using the outputs of one model as inputs to another is possible, and the idea appears in several established approaches.

If you stack models on top of each other, then at prediction time the model at position $n$ in the chain essentially performs preprocessing/feature derivation for the model at position $n+1$. One advantage is that additional preprocessing and feature derivation conceptually increases the power of the whole setup (let's just assume it's this way for simplicity), hence more complex problems can be solved with such chains. But it also makes training more complex. The usual questions are: how do you train this chain of models? All at the same time? One after another? And if supervised, which target variable do you use for the models that are not at the last position in the chain?
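To make the idea concrete, here is a minimal scikit-learn sketch of using classifier A's output as an extra feature for classifier B. The dataset, the specific models, and the use of out-of-fold predictions are my choices for illustration, not part of the question; the out-of-fold step simply keeps B from being trained on predictions A made on its own training data.

```python
# Minimal sketch: classifier A's output becomes feature N+1 for classifier B.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classifier A: use out-of-fold predicted probabilities on the training set,
# so B never sees predictions A made on data A was fitted to.
clf_a = LogisticRegression(max_iter=1000)
a_train = cross_val_predict(clf_a, X_train, y_train, cv=5, method="predict_proba")[:, 1]
clf_a.fit(X_train, y_train)                     # refit A on the full training set
a_test = clf_a.predict_proba(X_test)[:, 1]      # A's output for new data

# Classifier B: the original N features plus A's output as feature N+1.
X_train_b = np.column_stack([X_train, a_train])
X_test_b = np.column_stack([X_test, a_test])
clf_b = RandomForestClassifier(random_state=0).fit(X_train_b, y_train)
print("test accuracy of B with A's output as a feature:", clf_b.score(X_test_b, y_test))
```

If you'd rather not wire this up by hand, scikit-learn also ships a ready-made `StackingClassifier` that wraps the same pattern.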

Such thoughts lead to, e.g., what we currently see in deep learning: one core concept of deep neural networks (deep nets) is that there are many layers, where each layer performs additional feature preprocessing/derivation that is embedded right into the model (and therefore increases the model's power). Conceptually, all layers could be trained together, which would amount to plain supervised learning. But in practice, complexity issues usually prevent this. This is why some deep nets learn some layers individually, some learn parts in a supervised manner, some in an unsupervised manner, and most mix these concepts. Consider, e.g., the learning process and information flow in deep belief networks, convolutional neural nets, or deep autoencoders. My guess is that those get pretty close to what you had in mind when asking your question.
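As a small illustration of the "unsupervised feature learning feeds a supervised model" pattern (in the spirit of layer-wise pretraining), here is a sketch using scikit-learn's `BernoulliRBM` as the unsupervised stage; the dataset and hyperparameters are arbitrary placeholders:

```python
# Sketch: an unsupervised RBM learns features, a supervised model classifies them.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_digits(return_X_y=True)

model = Pipeline([
    ("scale", MinMaxScaler()),                          # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.06,
                         n_iter=20, random_state=0)),   # unsupervised feature layer
    ("clf", LogisticRegression(max_iter=1000)),         # supervised model on top
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```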

Sidenote: you might also want to look into the concept of boosting if you are not familiar with it yet. Boosting does not "chain" models in the above sense, but it uses the training error of one model to influence the training of subsequent models, which has proven very effective in practice. Boosting in a nutshell: each sample is assigned a weight at the start of training. After a model is trained and evaluated, samples that were classified wrongly get their weights increased, while samples that were classified correctly get their weights decreased. The next model uses these new weights during training and therefore puts more emphasis on the samples that were misclassified before. This process is repeated, so each model is influenced by its predecessor and focuses on the part of the data that is "currently difficult to classify".
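To make that reweighting scheme concrete, here is a toy AdaBoost-style sketch. The synthetic data, decision stumps, and number of rounds are arbitrary choices for illustration; in practice you would typically reach for a library implementation such as scikit-learn's `AdaBoostClassifier`.

```python
# Toy AdaBoost-style boosting: reweight samples after each weak learner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)   # labels are 0/1
w = np.full(len(y), 1.0 / len(y))                           # equal weights to start
stumps, alphas = [], []

for _ in range(10):                                         # 10 weak learners in sequence
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.average(pred != y, weights=w)                  # weighted training error
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))       # this learner's vote weight
    # Misclassified samples get heavier, correct ones lighter, then renormalise.
    w *= np.exp(alpha * np.where(pred != y, 1.0, -1.0))
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Final prediction: weighted vote of all weak learners (labels mapped to -1/+1).
votes = sum(a * (2 * s.predict(X) - 1) for a, s in zip(alphas, stumps))
print("training accuracy of the boosted ensemble:", np.mean((votes > 0) == y))
```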