The question is:
What is the difference between classical k-means and spherical k-means?
Classic K-means:
In classic k-means, we seek to minimize the Euclidean distance between each cluster center and the members of that cluster. The intuition is that the radial distance from the cluster center to each member should be small, and similar across all members of the cluster.
The algorithm is (a minimal code sketch follows the list):
- Set the number of clusters, k
- Initialize by randomly assigning each point to one of the k cluster indices
- Repeat until convergence:
- For each point, find the nearest cluster center and assign the point to that cluster
- For each cluster, compute the mean of its member points and move the center to that mean
- The error is the sum of squared Euclidean distances from each point to its assigned center
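Here is a minimal NumPy sketch of that loop (my own illustration, not from any particular library), using the random-assignment initialization described above:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Euclidean k-means (Lloyd's algorithm) with random-partition init."""
    rng = np.random.default_rng(seed)
    # Initialize by assigning each point to a random cluster index.
    labels = rng.integers(k, size=len(X))
    centers = np.zeros((k, X.shape[1]))
    for _ in range(iters):
        # Update step: each center moves to the mean of its member points.
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
        # Assignment step: each point goes to its nearest center (Euclidean).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):  # no change: converged
            break
        labels = new_labels
    return centers, labels
```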
Spherical K-means:
In spherical k-means, the idea is to choose each cluster center so that the angles between the center and the cluster's members are small and roughly uniform. The intuition is like looking at stars: the points in a cluster should have consistent angular spacing from one another. That spacing is quantified as "cosine similarity", and it means there are no "Milky Way" galaxies forming large bright swathes across the sky of the data. (Yes, I'm trying to speak to grandma in this part of the description.)
More technical version:
Think about vectors: the things you graph as arrows, with an orientation and a fixed length. A vector can be translated anywhere and remain the same vector.
The orientation of a vector in the space (its angle from a reference line) can be computed using linear algebra, particularly the dot product.
If we move all the data so that the vectors' tails are at the same point, we can compare the vectors by their angles and group similar ones into a single cluster.
For clarity, the vectors are scaled to the same length, so that they are easier to compare by eye.
You could think of each cluster as a constellation: the stars in it are close to each other in the angular sense. The group labels in the data below are my eyeballed constellations.
The value of the general approach is that it lets us build vectors for things that otherwise have no geometric dimension, as in the tf-idf method, where the vectors are word frequencies in documents. Two added "and"s do not equal a "the": words are discrete and non-numeric. They are non-physical in a geometric sense, but we can represent them geometrically and then use geometric methods to handle them. Spherical k-means can then be used to cluster documents based on their words.
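As a toy illustration (the three-word vocabulary and the counts below are made up): two documents with the same word proportions but very different lengths point in the same direction, so their cosine similarity is 1 even though their Euclidean distance is large.

```python
import numpy as np

def cosine_similarity(a, b):
    # Angle-based similarity: 1.0 means same direction, 0.0 means orthogonal.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical term counts over the vocabulary ["and", "the", "star"]:
doc_short = np.array([2.0, 1.0, 1.0])
doc_long = np.array([20.0, 10.0, 10.0])  # same proportions, 10x the length

print(cosine_similarity(doc_short, doc_long))  # 1.0: identical direction
```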
So the (2d random, continuous) data was this:
$$
\begin{bmatrix}
x_1 & y_1 & x_2 & y_2 & \text{group}\\
0&-0.8&-0.2013&-0.7316&B\\
-0.8&0.1&-0.9524&0.3639&A\\
0.2&0.3&0.2061&-0.1434&C\\
0.8&0.1&0.4787&0.153&B\\
-0.7&0.2&-0.7276&0.3825&A\\
0.9&0.9&0.748&0.6793&C\\
\end{bmatrix}
$$
Some points:
- The vectors are projected onto a unit sphere to account for differences in document length.
Let's work through an actual process, and see how (bad) my "eyeballing" was.
The procedure is:
- (implicit in the problem) connect the vectors' tails at the origin
- project onto unit sphere (to account for differences in document length)
- use clustering to minimize "cosine dissimilarity"
$$ J = \sum_{i} d \left( x_{i},p_{c\left( i \right)} \right) $$
where
$$ d \left( x,p \right) = 1 - \cos \left( x,p \right) =
1 - \frac{\langle x,p \rangle}{\left \| x \right \| \left \| p \right \|} $$
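Here is a minimal NumPy sketch of that procedure (my own illustration, not the implementation from the links below), where the rows of X are the data vectors:

```python
import numpy as np

def spherical_kmeans(X, k, iters=100, seed=0):
    """Sketch of spherical k-means: cluster unit vectors by cosine dissimilarity."""
    rng = np.random.default_rng(seed)
    # Connect tails at the origin and project onto the unit sphere.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Initialize centers with k randomly chosen (already unit-length) points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # For unit vectors, cos(x, p) = <x, p>, so minimizing
        # d(x, p) = 1 - <x, p> means maximizing the dot product.
        labels = (X @ centers.T).argmax(axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                m = X[labels == j].sum(axis=0)
                centers[j] = m / np.linalg.norm(m)  # re-project onto the sphere
    return centers, labels
```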
Links:
- http://epub.wu.ac.at/4000/1/paper.pdf
- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111.8125&rep=rep1&type=pdf
- http://www.cs.gsu.edu/~wkim/index_files/papers/refinehd.pdf
- https://www.jstatsoft.org/article/view/v050i10
- http://www.mathworks.com/matlabcentral/fileexchange/32987-the-spherical-k-means-algorithm
- https://ocw.mit.edu/courses/sloan-school-of-management/15-097-prediction-machine-learning-and-statistics-spring-2012/projects/MIT15_097S12_proj1.pdf
Best Answer
Tied distances are rather unusual. So even if the points are close, one will be nearer.
But let's look at what happens if we happen to draw two duplicate points as initial centers.
Then the distance of any point to these two will be the same.
Most implementations will assign all points to either the first or the last cluster. So what happens then? That center moves to the global data mean. The other one remains unchanged (usually; some implementations may fail on empty clusters, or decrease k, or draw a new center). At least two points (or initial centers) will be assigned to that mean on the next iteration, and the first center will thus move further away from its initial value. So we don't really have a problem here (well, some bad implementations may hit a division by zero...), it just takes a bit longer to converge.
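A toy demonstration of that recovery (the 1-D data and the duplicate center value are made up):

```python
import numpy as np

# Made-up 1-D data; both initial centers are the same duplicate point.
X = np.array([[0.0], [1.0], [2.0], [9.0], [10.0], [11.0]])
centers = np.array([[5.0], [5.0]])

for _ in range(10):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    # argmin breaks ties toward the first center, as most implementations do.
    labels = dists.argmin(axis=1)
    for j in range(2):
        if np.any(labels == j):  # leave an empty cluster's center alone
            centers[j] = X[labels == j].mean(axis=0)

print(centers.ravel())  # [10, 1]: the duplicate centers separate and converge
```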
A slight problem may arise if our data is symmetric. Assume we have a set of data points that is symmetric around 10.
We would intuitively like our centers to be something symmetric like 4 and 16 if we use k=2 (the optimum solution probably is 2 and 15.x, so not symmetric).
But if we draw 10 and 10 as initial cluster centers, we will usually get a result where all points are assigned to the first cluster, with mean 10, and no points in the second cluster (whose mean remains at the previous value, 10).
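Running the same loop on hypothetical symmetric data (the original values are not given, so the ones below are made up) shows the stall: every tied assignment goes to the first cluster, whose mean is exactly 10, so neither center ever moves.

```python
import numpy as np

# Hypothetical data, symmetric around 10.
X = np.array([[float(v)] for v in range(1, 20)])  # 1, 2, ..., 19; mean is 10
centers = np.array([[10.0], [10.0]])

for _ in range(10):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)  # every tie resolves to the first center
    for j in range(2):
        if np.any(labels == j):  # the second cluster stays empty forever
            centers[j] = X[labels == j].mean(axis=0)

print(centers.ravel())  # stays [10, 10]: the degenerate solution persists
```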