Solved – Concept of iterations in AdaBoost

adaboost, boosting, classification, ensemble learning

I can't seem to get my head around "iterations" in AdaBoost.

Are they analogous to the weak classifiers used for boosting?

I've seen many examples of AdaBoost where programmers use either:

  1. A single classifier trained multiple times (initialized with different parameters each time, resulting in different classifiers).
  2. Multiple distinct weak classifiers.

How do "iterations" fit in with the above-mentioned cases?

Best Answer

Yes, the number of iterations and the number of weak classifiers in the final strong classifier are the same.
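
As a concrete illustration (my own example, not part of the original question), scikit-learn's AdaBoostClassifier exposes this number directly as its n_estimators parameter; the toy dataset below is purely for demonstration:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Toy data purely for illustration
X, y = make_classification(n_samples=500, random_state=0)

# n_estimators is the number of boosting iterations, and hence the
# number of weak classifiers combined into the final strong classifier
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

print(len(clf.estimators_))  # 50: one weak classifier per iteration
                             # (fewer only if boosting stops early on a perfect fit)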

Here is how AdaBoost works:

Initialize the weights of the data points (uniformly)
FOR t = 1 to N, do
    Train/choose/select a weak classifier (one that does better than chance on the weighted data)
    Compute the weighted classification error of the selected weak classifier
    Compute the classifier's weight (alpha) from that error
    Increase the weights of the data points that are wrongly classified
    Normalize the weights
END
Output the strong classifier as a weighted linear combination of the N weak classifiers

The body of the for-loop constitutes one iteration of AdaBoost, so running N iterations produces exactly N weak classifiers.
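
To make that loop concrete, here is a minimal from-scratch sketch in Python using depth-1 decision trees (stumps) as the weak classifiers. The function names and the choice of stumps are illustrative assumptions on my part, not part of the original answer:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_iterations):
    """Train a binary AdaBoost ensemble; labels y must be in {-1, +1}."""
    n = len(y)
    weights = np.full(n, 1.0 / n)             # initialize data-point weights uniformly
    stumps, alphas = [], []
    for t in range(n_iterations):             # each pass is one "iteration"
        # Train a weak classifier on the currently weighted data
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=weights)
        pred = stump.predict(X)
        # Weighted classification error of this weak classifier
        err = weights[pred != y].sum()
        err = np.clip(err, 1e-10, 1 - 1e-10)  # guard against log(0) below
        # Classifier weight (alpha): larger when the error is smaller
        alpha = 0.5 * np.log((1 - err) / err)
        # Increase the weights of wrongly classified points, then normalize
        weights *= np.exp(-alpha * y * pred)
        weights /= weights.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    # Strong classifier: sign of the linear combination of weak classifiers
    scores = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(scores)

Calling adaboost_fit(X, y, n_iterations=50) returns 50 stumps and 50 alphas: one weak classifier per iteration, which is exactly the correspondence described above.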
