I'd try 'folding in'. This refers to taking one new document, adding it to the corpus, and then running Gibbs sampling only on the words in that new document, keeping the topic assignments of the old documents fixed. This usually converges quickly (maybe 5 to 20 iterations), and since you don't need to resample your old corpus, it also runs fast. At the end you will have a topic assignment for every word in the new document, which gives you the distribution of topics in that document.
In your Gibbs sampler, you probably have something similar to the following code:
// This will initialize the matrices of counts, N_tw (topic-word matrix) and N_dt (document-topic matrix)
for doc = 1 to N_Documents
    for token = 1 to N_Tokens_In_Document
        Assign current token to a random topic, updating the count matrices
    end
end
// This will do the Gibbs sampling
for iteration = 1 to N_Iterations
    for doc = 1 to N_Documents
        for token = 1 to N_Tokens_In_Document
            Remove the current token's topic assignment from the count matrices
            Compute probability of current token being assigned to each topic
            Sample a topic from this distribution
            Assign the token to the new topic, updating the count matrices
        end
    end
end
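Concretely, here is a minimal runnable sketch of that sampler in Python/NumPy. All names (`train_lda`, `alpha`, `beta`, `n_iters`) are illustrative, not from any library; it assumes each document is a list of integer word ids in `[0, V)` and `K` topics.

```python
import numpy as np

def train_lda(docs, V, K, alpha=0.1, beta=0.01, n_iters=100):
    N_tw = np.zeros((K, V))          # topic-word counts
    N_dt = np.zeros((len(docs), K))  # document-topic counts
    N_t = np.zeros(K)                # total tokens assigned to each topic
    z = []                           # topic assignment of every token

    # Initialization: assign each token to a random topic
    for d, doc in enumerate(docs):
        z_d = np.random.randint(K, size=len(doc))
        z.append(z_d)
        for w, t in zip(doc, z_d):
            N_tw[t, w] += 1
            N_dt[d, t] += 1
            N_t[t] += 1

    # Collapsed Gibbs sampling
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                # Remove the current assignment from the counts
                N_tw[t, w] -= 1; N_dt[d, t] -= 1; N_t[t] -= 1
                # Conditional distribution over topics for this token
                p = (N_tw[:, w] + beta) / (N_t + V * beta) * (N_dt[d] + alpha)
                p /= p.sum()
                t = np.random.choice(K, p=p)
                # Add the new assignment back
                z[d][i] = t
                N_tw[t, w] += 1; N_dt[d, t] += 1; N_t[t] += 1

    return N_tw, N_dt, z
```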
Folding-in is the same, except you start with the existing matrices, add the new document's tokens to them, and do the sampling for only the new tokens. I.e.:
Start with the N_tw and N_dt matrices from the previous step
// This will update the count matrices for folding-in
for token = 1 to N_Tokens_In_New_Document
    Assign current token to a random topic, updating the count matrices
end
// This will do the folding-in by Gibbs sampling
for iteration = 1 to N_Iterations
    for token = 1 to N_Tokens_In_New_Document
        Remove the current token's topic assignment from the count matrices
        Compute probability of current token being assigned to each topic
        Sample a topic from this distribution
        Assign the token to the new topic, updating the count matrices
    end
end
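Under the same assumptions as the sketch above, folding-in reuses the trained `N_tw` counts and samples only the new document's tokens (`fold_in` is again an illustrative name, not a library function):

```python
def fold_in(new_doc, N_tw, K, V, alpha=0.1, beta=0.01, n_iters=20):
    N_tw = N_tw.copy()      # leave the trained counts untouched
    N_t = N_tw.sum(axis=1)  # tokens currently assigned to each topic
    N_dt_new = np.zeros(K)  # topic counts for the new document only

    # Initialization: random topics for the new document's tokens
    z = np.random.randint(K, size=len(new_doc))
    for w, t in zip(new_doc, z):
        N_tw[t, w] += 1
        N_dt_new[t] += 1
        N_t[t] += 1

    # Gibbs sampling over the new tokens only
    for _ in range(n_iters):
        for i, w in enumerate(new_doc):
            t = z[i]
            N_tw[t, w] -= 1; N_dt_new[t] -= 1; N_t[t] -= 1
            p = (N_tw[:, w] + beta) / (N_t + V * beta) * (N_dt_new + alpha)
            p /= p.sum()
            t = np.random.choice(K, p=p)
            z[i] = t
            N_tw[t, w] += 1; N_dt_new[t] += 1; N_t[t] += 1

    # Topic distribution of the new document
    return (N_dt_new + alpha) / (len(new_doc) + K * alpha)
```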
If you do standard LDA, it is unlikely that an entire document was generated by a single topic, so I don't know how useful it is to compute the probability of the document under one topic. But if you still want to, it's easy. From the two matrices you can compute $p^i_w$, the probability of word $w$ under topic $i$. Take your new document and suppose its $j$'th word is $w_j$. The words are independent given the topic, so the probability is just $$\prod_j p^i_{w_j}$$ (note that you will probably need to compute this in log space to avoid numerical underflow).
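For example, a sketch of that computation in log space, assuming the `N_tw` matrix from the sampler above, a smoothing prior `beta`, and a new document given as a list of word ids (the topic index and `beta` value below are illustrative):

```python
# Smoothed topic-word probabilities p^i_w from the count matrix
beta = 0.01
V = N_tw.shape[1]
phi = (N_tw + beta) / (N_tw.sum(axis=1, keepdims=True) + V * beta)

i = 0  # topic of interest
log_prob = np.log(phi[i, new_doc]).sum()  # log of prod_j p^i_{w_j}
```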
For my own curiosity, I applied a clustering algorithm that I've been working on to this dataset.
I've temporarily put up the results here (choose the essays dataset).
It seems the problem is not the starting points or the algorithm, but the data. You can 'reasonably' (subjectively, in my limited experience) get good clusters even with 147 instances, as long as there are some hidden topics/concepts/themes/clusters (whatever you would like to call them).
If the data does not have well-separated topics, then no matter which algorithm you use, you might not get good answers.
Best Answer
You have to train your model, get the topic distributions for both of the documents you want to compare, and then choose a metric to compare them. For example, the topic distributions are vectors, so you can use the Euclidean distance between them as an indicator of the difference between the documents.
EDIT - (example)
With gensim, you'll have to do something like this:
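(A minimal sketch; `texts`, `doc_a`, and `doc_b` are placeholder inputs, and the number of topics is arbitrary.)

```python
import numpy as np
from gensim import corpora, models

# texts: list of tokenized training documents
# doc_a, doc_b: the two tokenized documents to compare
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=20)

def topic_vector(tokens, num_topics=20):
    # Dense topic-distribution vector for one document
    bow = dictionary.doc2bow(tokens)
    vec = np.zeros(num_topics)
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        vec[topic_id] = prob
    return vec

# Euclidean distance between the two topic distributions
distance = np.linalg.norm(topic_vector(doc_a) - topic_vector(doc_b))
```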