Solved – K-means++-like initialization for K-medoids

clustering, k-medoids, k-means

Does it make sense to use a K-means++-style initialization for K-medoids?

To be precise: is it good to select the "farthest" points as initial medoids? (Farthest in the sense that points that are farther from each other have a greater probability of being selected as initial medoids.)

I think it makes sense, but I would like confirmation.

Best Answer

If you are talking about the PAM algorithm, remember that it already includes a BUILD phase and, as Anony-Mousse said in the comments, that phase does most of the work. The SWAP phase is a refinement of the initial solution that scales badly with the number of objects.
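As an illustration, here is a minimal sketch of the greedy BUILD step, assuming a precomputed dissimilarity matrix `D` (my own simplified version, not the full PAM implementation):

```python
import numpy as np

def pam_build(D, k):
    """Greedy BUILD sketch: D is an n x n dissimilarity matrix.
    The first medoid minimizes the total dissimilarity to all points;
    each later medoid is the point whose addition most reduces the
    total cost sum_i min_m D[i, m]."""
    n = D.shape[0]
    medoids = [int(D.sum(axis=1).argmin())]
    nearest = D[:, medoids[0]].copy()  # distance of each point to its nearest medoid
    for _ in range(k - 1):
        # gain of adding candidate j: total improvement over current nearest distances
        gains = np.maximum(nearest[:, None] - D, 0).sum(axis=0)
        gains[medoids] = -1            # never re-pick an existing medoid
        j = int(gains.argmax())
        medoids.append(j)
        nearest = np.minimum(nearest, D[:, j])
    return medoids
```

Note that each greedy step costs O(n^2), which is exactly why the deterministic BUILD phase is the expensive part the answer refers to.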

You can nevertheless try a randomized version of the BUILD step: instead of selecting the objects that minimize the total dissimilarity of every object to its nearest representative, sample them at random with probability proportional to that total dissimilarity.
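A sketch of such a randomized BUILD (hypothetical code, not from any library): each new medoid is sampled with probability proportional to a point's current distance to its nearest chosen medoid, i.e. its contribution to the total dissimilarity, in the spirit of k-means++ seeding:

```python
import numpy as np

def randomized_build(D, k, rng=None):
    """Randomized BUILD sketch: D is an n x n dissimilarity matrix.
    The first medoid is drawn uniformly at random; each later medoid is
    sampled with probability proportional to each point's distance to
    its nearest already-chosen medoid."""
    rng = np.random.default_rng(rng)
    n = D.shape[0]
    medoids = [int(rng.integers(n))]
    for _ in range(k - 1):
        nearest = D[:, medoids].min(axis=1)  # current contribution of each point
        probs = nearest / nearest.sum()      # chosen points have distance 0, so prob 0
        medoids.append(int(rng.choice(n, p=probs)))
    return medoids
```

Each step is O(nk) instead of O(n^2), which is where the potential speed-up over the greedy BUILD comes from.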

Anyway, it is hard to tell a priori whether this is a better initialization. You should test both initializations empirically, try to prove (if you dare!) that the random initialization leads on average to better solutions, or construct approximation guarantees as in k-means++.
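For the empirical comparison, it is enough to evaluate the objective both initializations try to reduce, the total dissimilarity of every object to its nearest medoid (a hypothetical helper):

```python
import numpy as np

def medoid_cost(D, medoids):
    """Objective to compare across initializations: the sum over all
    points of the dissimilarity to the nearest medoid in `medoids`."""
    return float(D[:, medoids].min(axis=1).sum())

# e.g. run both initializations many times on the same D and compare
# the distribution of medoid_cost(D, seeds), both before and after SWAP.
```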
