Top2vec – Effectively Normalizing Topic Vectors in Top2vec

dbscan, doc2vec, natural language, python, topic-models

I am trying to understand how Top2Vec works. I have some questions about the code that I could not find answered in the paper. In summary, the algorithm (sketched in code after this list):

  • embeds words and documents in the same semantic space and normalizes them; this space usually has more than 300 dimensions.
  • projects them into a 5-dimensional space using UMAP with the cosine metric.
  • creates topics as centroids of clusters found by HDBSCAN with the Euclidean metric on the projected data.
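
Putting the three steps together, here is a minimal sketch of that pipeline, assuming gensim, umap-learn, and hdbscan; `raw_docs` is an assumed, pre-loaded list of document strings, and the parameter values are illustrative, not necessarily the exact settings Top2Vec uses.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
import umap
import hdbscan

def l2_normalize(vectors):
    """Scale each row to unit length."""
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

# 1. Embed documents and words in the same semantic space (doc2vec DBOW).
corpus = [TaggedDocument(words=doc.split(), tags=[i])
          for i, doc in enumerate(raw_docs)]   # raw_docs: assumed list of strings
model = Doc2Vec(corpus, vector_size=300, dm=0, dbow_words=1,
                min_count=50, epochs=40)
doc_vectors = l2_normalize(model.dv.vectors)
word_vectors = l2_normalize(model.wv.vectors)

# 2. Project the document vectors to 5 dimensions with UMAP (cosine metric).
umap_embedding = umap.UMAP(n_neighbors=15, n_components=5,
                           metric='cosine').fit_transform(doc_vectors)

# 3. Cluster the projected vectors with HDBSCAN (Euclidean metric).
clusterer = hdbscan.HDBSCAN(min_cluster_size=15,
                            metric='euclidean').fit(umap_embedding)
labels = clusterer.labels_  # -1 marks noise documents
```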

What troubles me is that they normalize the topic vectors. However, the output of UMAP is not normalized, and normalizing the topic vectors would probably move them out of their clusters. This seems inconsistent with the paper's description, where a topic vector is the arithmetic mean of all document vectors that belong to the same topic.
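
For concreteness, here is a toy numpy illustration of that worry, with made-up 5-dimensional coordinates standing in for the UMAP output:

```python
import numpy as np

# Three nearby points in a fictional 5-d projected space.
cluster = np.array([[4.0, 3.0, 0.5, -1.0, 2.0],
                    [4.2, 2.8, 0.4, -1.1, 2.1],
                    [3.9, 3.1, 0.6, -0.9, 1.9]])

centroid = cluster.mean(axis=0)                   # lies inside the cluster
normalized = centroid / np.linalg.norm(centroid)  # rescaled to unit length

print(np.linalg.norm(cluster - centroid, axis=1).mean())    # small
print(np.linalg.norm(cluster - normalized, axis=1).mean())  # much larger
```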

This leads to two questions:

How are they going to calculate the nearest words to find the keywords of each topic, given that they have altered the topic vector by normalization?

After creating the topics as clusters, they try to deduplicate very similar topics using cosine similarity. This makes sense with normalized topic vectors, but at the same time it extends the inconsistency that normalizing the topic vectors introduced. Am I missing something here?
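
The deduplication step I am asking about would look roughly like the sketch below; the 0.9 threshold and the greedy merge rule are my assumptions, not necessarily what Top2Vec actually does.

```python
import numpy as np

def merge_similar_topics(topic_vectors, threshold=0.9):
    """Greedily fold each topic into an earlier kept topic when the cosine
    similarity of their (unit-length) vectors exceeds the threshold."""
    sims = topic_vectors @ topic_vectors.T  # cosine similarity for unit rows
    kept, merged_into = [], {}
    for i in range(len(topic_vectors)):
        match = next((j for j in kept if sims[i, j] > threshold), None)
        if match is None:
            kept.append(i)
        else:
            merged_into[i] = match
    return kept, merged_into
```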

Best Answer

I got the answer to my questions from the source code. I was going to delete the question, but I will leave the answer anyway.

Here is the part I missed, and what is wrong in my question: topic vectors are the arithmetic mean of all document vectors that belong to the same topic, but they are computed in the original semantic space where the word and document vectors live, not in the UMAP-projected space.

That is why it makes sense to normalize them, since all word and document vectors are normalized, and to use the cosine metric when looking for duplicate topics in the original high-dimensional semantic space.
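
As a sketch of the corrected picture, reusing doc_vectors, word_vectors, and labels from the pipeline sketch in the question and assuming gensim's index_to_key vocabulary lookup (the top-10 keyword cutoff is illustrative):

```python
import numpy as np

topic_vectors = []
for label in sorted(set(labels) - {-1}):          # skip HDBSCAN noise (-1)
    # Mean of the ORIGINAL high-dimensional document vectors in the cluster,
    # not of their 5-d UMAP projections, then re-normalized to unit length.
    centroid = doc_vectors[labels == label].mean(axis=0)
    topic_vectors.append(centroid / np.linalg.norm(centroid))
topic_vectors = np.array(topic_vectors)

# Topic keywords: nearest word vectors by cosine similarity, which for
# unit-length vectors is just the dot product.
sims = topic_vectors @ word_vectors.T
for t, row in enumerate(sims):
    keywords = [model.wv.index_to_key[i] for i in np.argsort(row)[::-1][:10]]
    print(f"topic {t}: {keywords}")
```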
