AdaBoost – Why Choose AdaBoost with Decision Trees?

algorithms, boosting, classification, machine-learning

I've been reading a bit about boosting algorithms for classification tasks, and about AdaBoost in particular. I understand that the purpose of AdaBoost is to combine several "weak learners" and, over a series of iterations on the training data, reweight the examples so that each new classifier concentrates on the examples the ensemble has repeatedly misclassified. However, I was wondering why so many of the readings I've done use decision trees as the weak classifier. Is there a particular reason for this? Are there certain classifiers that make particularly good or bad candidates for AdaBoost?
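To make the setup being asked about concrete, here is a minimal sketch of AdaBoost with depth-1 decision trees ("decision stumps") as the weak learners. The choice of scikit-learn is mine, not something the question specifies, and the sketch assumes scikit-learn >= 1.2, where the base learner is passed as `estimator` (older versions call it `base_estimator`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each boosting round fits a stump to a reweighted version of the
# training data, with weight concentrated on the examples earlier
# rounds got wrong; the final prediction is a weighted vote.
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=200,
    random_state=0,
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Swapping a different `estimator` into this sketch (e.g. a deeper tree or a linear model that supports sample weights) is exactly the comparison the question is asking about.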
Best Answer
I talked about this in an answer to a related SO question. Decision trees are just generally a very good fit for boosting, much more so than other algorithms. The bullet-point summary version is this: