The question is:
What is the difference between classical k-means and spherical k-means?
Classic K-means:
In classic k-means, we seek to minimize the (squared) Euclidean distance between the cluster center and the members of the cluster. The intuition behind this is that the radial distance from the cluster center to each member should be "similar" for all elements of that cluster.
The algorithm is:
- Set number of clusters (aka cluster count)
- Initialize by randomly assigning points in the space to cluster indices
- Repeat until convergence:
  - For each point, find the nearest cluster center and assign the point to that cluster
  - For each cluster, compute the mean of its member points and set that mean as the new cluster center
- The error is the sum of (squared) Euclidean distances from each point to its cluster center (see the sketch after this list)
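A minimal sketch of that loop in Python/NumPy (the function name, random-assignment initialization, and convergence test are my own choices for illustration, not a reference implementation):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means on an (n_points, n_dims) float array X."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))        # random initial assignment
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Update step: each center becomes the mean of its current member points.
        for j in range(k):
            members = X[labels == j]
            if len(members):                        # keep the old center if a cluster is empty
                centers[j] = members.mean(axis=0)
        # Assignment step: each point joins its nearest center (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):      # converged: assignments stopped changing
            break
        labels = new_labels
    error = np.sum((X - centers[labels]) ** 2)      # sum of squared distances to centers
    return labels, centers, error
```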
Spherical K-means:
In spherical k-means, the idea is to choose the center of each cluster so that the angles between the center and the cluster's members are both uniform and small. The intuition is like looking at stars: the points in a cluster should have consistent spacing between each other. That spacing is simpler to quantify as "cosine similarity", but it means there are no "Milky Way" galaxies forming large bright swathes across the sky of the data. (Yes, I'm trying to speak to grandma in this part of the description.)
More technical version:
Think about vectors: the things you graph as arrows with an orientation and a fixed length. A vector can be translated anywhere and still be the same vector.

The orientation of a vector in the space (its angle from a reference direction) can be computed using linear algebra, particularly the dot product.
If we move all the data so that the tails of the vectors are at the same point, we can compare vectors by their angle and group similar ones into a single cluster.

For clarity, the vectors are scaled to the same (unit) length, so that they are easier to compare by "eyeball".
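As a sketch of the "compare by angle" idea, using two made-up 2-d vectors:

```python
import numpy as np

a = np.array([0.8, 0.1])
b = np.array([0.9, 0.9])

# Cosine of the angle between them: dot product divided by the product of lengths.
cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Scaling each vector to unit length ("projecting onto the unit sphere")
# keeps only its direction, so the comparison depends on angle alone.
a_hat = a / np.linalg.norm(a)
b_hat = b / np.linalg.norm(b)
print(cos_sim, a_hat @ b_hat)   # the two values are identical
```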

You could think of it as constellations: the stars in a single cluster are close to each other in some sense. These are the constellations I picked out by eyeball.

The value of the general approach is that it allows us to contrive vectors for data which otherwise has no geometric dimension, such as in the tf-idf method, where the vectors are word frequencies in documents. Two added "and" words do not equal a "the"; words are discrete and non-numeric. They are non-physical in a geometric sense, but we can contrive a geometric representation for them, and then use geometric methods to handle them. Spherical k-means can be used to cluster documents based on their words.
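As an illustration of contriving vectors from text, here is a sketch using scikit-learn's tf-idf vectorizer on made-up toy documents (any tf-idf implementation would do):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "stars form constellations in the sky",
]

# Each document becomes a vector of tf-idf-weighted word frequencies.
X = TfidfVectorizer().fit_transform(docs)

# Cosine similarity compares documents by the angle between their vectors,
# which is exactly the quantity spherical k-means clusters on.
print(cosine_similarity(X))
```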
So the (2d random, continuous) data was this:
$$
\begin{bmatrix}
x_1 & y_1 & x_2 & y_2 & \text{group}\\
0&-0.8&-0.2013&-0.7316&B\\
-0.8&0.1&-0.9524&0.3639&A\\
0.2&0.3&0.2061&-0.1434&C\\
0.8&0.1&0.4787&0.153&B\\
-0.7&0.2&-0.7276&0.3825&A\\
0.9&0.9&0.748&0.6793&C\\
\end{bmatrix}
$$
Some points:
- The vectors are projected onto a unit sphere to account for differences in document length (a worked example follows this list).
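For example, projecting an arbitrary 2-d point such as $(3, 4)$ onto the unit circle just divides it by its length:
$$ \frac{(3,\,4)}{\sqrt{3^2 + 4^2}} = \frac{(3,\,4)}{5} = (0.6,\,0.8) $$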
Let's work through the actual process, and see how good (or bad) my "eyeballing" was.
The procedure is:
- (implicit in the problem) place the tails of all vectors at the origin
- project onto unit sphere (to account for differences in document length)
- use clustering to minimize "cosine dissimilarity"
$$ J = \sum_{i} d \left( x_{i},p_{c\left( i \right)} \right) $$
where
$$ d \left( x,p \right) = 1- \cos \left(x,p\right) =
1 - \frac{\langle x,p \rangle}{\left \|x \right \|\left \|p \right \|} $$
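A minimal sketch of that procedure (the function name and the re-normalized-mean update are my own choices for illustration; it assumes no all-zero rows in the data):

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=100, seed=0):
    """Cluster rows of X by cosine dissimilarity d(x, p) = 1 - cos(x, p)."""
    rng = np.random.default_rng(seed)
    # Project every vector onto the unit sphere (accounts for document length).
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Initialize centers from k random (already unit-length) data points.
    centers = Xn[rng.choice(len(Xn), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: for unit vectors, minimizing 1 - cos is the same as
        # maximizing the dot product with the center.
        labels = (Xn @ centers.T).argmax(axis=1)
        # Update step: the new center is the re-normalized mean direction of the members.
        new_centers = centers.copy()
        for j in range(k):
            members = Xn[labels == j]
            if len(members):
                m = members.mean(axis=0)
                new_centers[j] = m / np.linalg.norm(m)
        if np.allclose(new_centers, centers):       # converged
            break
        centers = new_centers
    # Objective J: total cosine dissimilarity of points to their assigned centers.
    sims = Xn @ centers.T
    labels = sims.argmax(axis=1)
    J = np.sum(1.0 - sims[np.arange(len(Xn)), labels])
    return labels, centers, J
```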
(more edits coming soon)
Links:
- http://epub.wu.ac.at/4000/1/paper.pdf
- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111.8125&rep=rep1&type=pdf
- http://www.cs.gsu.edu/~wkim/index_files/papers/refinehd.pdf
- https://www.jstatsoft.org/article/view/v050i10
- http://www.mathworks.com/matlabcentral/fileexchange/32987-the-spherical-k-means-algorithm
- https://ocw.mit.edu/courses/sloan-school-of-management/15-097-prediction-machine-learning-and-statistics-spring-2012/projects/MIT15_097S12_proj1.pdf
First, to be clear, the term "centroid" is just another way of saying the "mean". K-means clustering, when performed in accordance with its definition, always redefines each cluster's centroid as the actual mean of all the data points that were assigned to that cluster on that iteration. It seems like your point of confusion is the re-classification step, but I'm not completely sure what your question is.
Yes, when re-classifying, you are using the mean of one distinct set of points to define a cluster of another distinct set of points, whose mean will likely differ from the mean of the first set. But these centroids/means are always the correct "real" mean of some particular set of points (after all, that is how the centroids are computed), except perhaps for your initial assignments.
I hope that helps, and that I'm not just restating a bunch of stuff you may already know. Perhaps try to clarify your question a little more.
There are other algorithms similar to k-means that don't necessarily use means at all - k-medians, for example.
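To make the update step concrete, here is a tiny illustration with made-up points, showing that each new centroid is exactly the mean of the points currently assigned to it, and that k-medians would differ only in using the coordinate-wise median:

```python
import numpy as np

# Points currently assigned to one cluster after re-classification (made up).
cluster_points = np.array([[1.0, 2.0], [3.0, 4.0], [2.0, 9.0]])

centroid = cluster_points.mean(axis=0)        # k-means update: the exact mean -> [2., 5.]
median = np.median(cluster_points, axis=0)    # k-medians update               -> [2., 4.]
print(centroid, median)
```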
Best Answer
The paper (see section 2.2) suggests that you use the squared distance when computing probabilities. In fact, you can try distance$^{\ell}$ using any exponent $\ell$ greater than 1. It stands to reason that as $\ell$ increases, the likelihood of initial centroids being close together will go to zero.
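A sketch of that seeding idea with a generic exponent $\ell$ (the function name is mine; the paper's exact procedure may differ in details such as the first-center choice):

```python
import numpy as np

def init_centers(X, k, ell=2.0, seed=0):
    """Pick k initial centers, sampling each new center with probability
    proportional to (distance to the nearest already-chosen center) ** ell."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]       # first center: uniformly at random
    for _ in range(k - 1):
        # Distance from every point to its nearest already-chosen center.
        d = np.linalg.norm(X[:, None, :] - np.array(centers)[None, :, :], axis=2).min(axis=1)
        p = d ** ell / np.sum(d ** ell)       # larger ell pushes new centers farther apart
        centers.append(X[rng.choice(len(X), p=p)])
    return np.array(centers)
```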