Solved – Accuracy of classifiers with AdaBoost

adaboost, boosting, classification, ensemble learning

Does AdaBoost ensure that the resulting ensemble's accuracy is greater than, or at least equal to, the accuracies of the individual classifiers?

What happens if Classifier A performs badly, the weights are updated accordingly, and the next Classifier B performs very well (better than Classifier A) on the data? How does AdaBoost handle this?

Best Answer

The answer to your first question is yes. In classical AdaBoost, if a newly added weak learner (e.g. Classifier B) does not reduce the overall empirical classification error, the algorithm stops. So the linear combination of weak learners does at least as well as the best individual weak learner on the training data.
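To make that concrete, here is a minimal sketch (assuming scikit-learn and NumPy are installed, and using a made-up synthetic dataset) that fits an AdaBoost ensemble of decision stumps and compares its training accuracy against the best single weak learner it contains:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# By default AdaBoostClassifier uses depth-1 decision trees (stumps) as weak learners.
ada = AdaBoostClassifier(n_estimators=50, random_state=0)
ada.fit(X, y)

ensemble_acc = ada.score(X, y)
# Accuracy of each weak learner on its own (each was trained on reweighted data,
# but is evaluated here on the plain training set for comparison).
weak_accs = [est.score(X, y) for est in ada.estimators_]

print(f"ensemble training accuracy:     {ensemble_acc:.3f}")
print(f"best single weak learner:       {max(weak_accs):.3f}")
```

On typical runs the ensemble's training accuracy matches or exceeds that of its best weak learner, which is the behaviour described above.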

Your second question: AdaBoost is an additive (and sequential) model. That is, the choice of Classifier B depends on the performance of the previously selected weak learners. If Classifier A performs badly on specific data points, AdaBoost increases their weights so that the next weak learner (Classifier B) focuses on them more than on the other data points.
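For illustration, here is a small from-scratch sketch of the AdaBoost.M1 reweighting step; names such as `update_weights` and `predictions_A` are hypothetical and chosen only to mirror the Classifier A / Classifier B wording above:

```python
import numpy as np

def update_weights(sample_weights, y_true, predictions_A):
    """Reweight training points after one weak learner (Classifier A)."""
    miss = (predictions_A != y_true)                      # points A misclassified
    err = np.sum(sample_weights[miss])                    # weighted error of A
    alpha = 0.5 * np.log((1 - err) / err)                 # A's vote in the ensemble
    # Increase weights where A was wrong, decrease where it was right,
    # then renormalize so the weights remain a distribution.
    new_weights = sample_weights * np.exp(alpha * np.where(miss, 1.0, -1.0))
    return new_weights / new_weights.sum(), alpha

# Toy example: 6 points with uniform weights; Classifier A gets two wrong.
y = np.array([1, 1, -1, -1, 1, -1])
pred_A = np.array([1, -1, -1, 1, 1, -1])                  # wrong on points 1 and 3
w = np.full(len(y), 1 / len(y))
w_next, alpha_A = update_weights(w, y, pred_A)
print("alpha for Classifier A:", round(alpha_A, 3))
print("weights used to train Classifier B:", np.round(w_next, 3))
```

The misclassified points end up with the largest weights, so Classifier B is fit against a distribution that emphasizes exactly the examples Classifier A got wrong; if B then performs well, it simply earns a larger coefficient (alpha) in the final weighted vote.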