I'd try 'folding in'. This means taking one new document, adding it to the corpus, and running Gibbs sampling only on the words of that new document, keeping the topic assignments of the old documents fixed. This usually converges quickly (on the order of 5-20 iterations), and since you don't resample the old corpus, it also runs fast. At the end you will have a topic assignment for every word in the new document, which gives you the distribution of topics in that document.
In your Gibbs sampler, you probably have something similar to the following code:
// This will initialize the matrices of counts, N_tw (topic-word matrix) and N_dt (document-topic matrix)
for doc = 1 to N_Documents
    for token = 1 to N_Tokens_In_Document
        Assign the current token to a random topic, updating the count matrices
    end
end
// This will do the Gibbs sampling
for doc = 1 to N_Documents
    for token = 1 to N_Tokens_In_Document
        Remove the current token's topic assignment from the count matrices
        Compute the probability of the current token being assigned to each topic
        Sample a topic from this distribution
        Assign the token to the new topic, updating the count matrices
    end
end
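Under some assumptions (a symmetric Dirichlet prior with hyperparameters alpha and beta, and the standard collapsed Gibbs conditional), the pseudocode above can be sketched in Python roughly as follows; the toy corpus, variable names, and hyperparameter values are all illustrative, not from the original answer:

```python
# Minimal collapsed Gibbs sampler sketch for LDA on a toy corpus.
import numpy as np

rng = np.random.default_rng(0)

docs = [[0, 1, 2, 1], [2, 3, 3, 0]]  # token ids per document
V, K = 4, 2                          # vocabulary size, number of topics
alpha, beta = 0.1, 0.01              # symmetric Dirichlet hyperparameters

N_tw = np.zeros((K, V))              # topic-word counts
N_dt = np.zeros((len(docs), K))      # document-topic counts
N_t = np.zeros(K)                    # total tokens per topic
z = [[0] * len(d) for d in docs]     # topic assignment of each token

# Initialization: assign every token to a random topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = rng.integers(K)
        z[d][i] = k
        N_tw[k, w] += 1
        N_dt[d, k] += 1
        N_t[k] += 1

# Gibbs sweeps
for _ in range(50):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # remove the current token's assignment from the counts
            N_tw[k, w] -= 1; N_dt[d, k] -= 1; N_t[k] -= 1
            # full conditional over topics, then sample
            p = (N_tw[:, w] + beta) / (N_t + V * beta) * (N_dt[d] + alpha)
            k = rng.choice(K, p=p / p.sum())
            z[d][i] = k
            N_tw[k, w] += 1; N_dt[d, k] += 1; N_t[k] += 1

# Per-document topic distributions (posterior mean estimate)
theta = (N_dt + alpha) / (N_dt + alpha).sum(axis=1, keepdims=True)
print(theta)
```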
Folding-in is the same, except you start with the existing matrices, add the new document's tokens to them, and do the sampling for only the new tokens. I.e.:
Start with the N_tw and N_dt matrices from the previous step
// This will update the count matrices for folding-in
for token = 1 to N_Tokens_In_New_Document
    Assign the current token to a random topic, updating the count matrices
end
// This will do the folding-in by Gibbs sampling
for token = 1 to N_Tokens_In_New_Document
    Remove the current token's topic assignment from the count matrices
    Compute the probability of the current token being assigned to each topic
    Sample a topic from this distribution
    Assign the token to the new topic, updating the count matrices
end
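A sketch of the fold-in step under the same illustrative assumptions; here the "trained" counts N_tw and N_t are simply made up, where in practice they would come from the sampler run on the original corpus:

```python
# Minimal folding-in sketch: Gibbs sampling restricted to a new document.
import numpy as np

rng = np.random.default_rng(0)

V, K = 4, 2
alpha, beta = 0.1, 0.01

# Pretend these came from Gibbs sampling on the training corpus
N_tw = np.array([[10., 8., 1., 0.],
                 [0., 1., 9., 11.]])   # topic-word counts
N_t = N_tw.sum(axis=1)                 # total tokens per topic

new_doc = [0, 1, 1, 2]                 # token ids of the new document
n_dt = np.zeros(K)                     # topic counts for the new doc only
z = []

# Random initialization of the new document's tokens
for w in new_doc:
    k = rng.integers(K)
    z.append(k)
    N_tw[k, w] += 1; N_t[k] += 1; n_dt[k] += 1

# Gibbs sampling over the new tokens only; old assignments stay fixed
for _ in range(20):
    for i, w in enumerate(new_doc):
        k = z[i]
        N_tw[k, w] -= 1; N_t[k] -= 1; n_dt[k] -= 1
        p = (N_tw[:, w] + beta) / (N_t + V * beta) * (n_dt + alpha)
        k = rng.choice(K, p=p / p.sum())
        z[i] = k
        N_tw[k, w] += 1; N_t[k] += 1; n_dt[k] += 1

# Topic distribution of the new document
theta_new = (n_dt + alpha) / (n_dt + alpha).sum()
print(theta_new)
```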
If you do standard LDA, it is unlikely that an entire document was generated by a single topic, so I'm not sure how useful it is to compute the probability of the document under one topic. But if you still want to, it's easy. From the two matrices you can compute $p^i_w$, the probability of word $w$ under topic $i$. Take your new document and suppose its $j$'th word is $w_j$. The words are independent given the topic, so the probability is just $$\prod_j p^i_{w_j}$$ (note that you will probably need to compute it in log space to avoid underflow).
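As a hedged illustration of the log-space computation, where the topic's word distribution p_w and the document are made-up examples:

```python
# Score a document under a single topic i by summing log word probabilities.
import numpy as np

p_w = np.array([0.5, 0.3, 0.15, 0.05])  # p^i_w for topic i over a 4-word vocab
doc = [0, 1, 1, 3]                      # word ids w_j of the new document

# log prod_j p^i_{w_j} = sum_j log p^i_{w_j}, which avoids underflow
log_prob = np.sum(np.log(p_w[doc]))
print(log_prob)
```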
The approach termed "Type 1" is already explored by Blei et al. (2003), §7.2 (http://jmlr.org/papers/volume3/blei03a/blei03a.pdf), where the low-dimensional LDA representation proves valuable as a feature set for an SVM classifier.
So "Type 1" is definitely one right way.
I have no comments on "Type 2" except that it lacks the clarity that "Type 1" has.
There's nothing stopping you, but this essentially reduces to learning a bag-of-words model for each label, albeit with a shared prior in the form of $\eta$.
To see why these are equivalent, see this snippet from the labelled LDA paper:
If the document has only a single—and importantly, observed—label, its topic assignment is limited to the corresponding topic, and all its words are generated from the same multinomial distribution. (This is because $\Lambda(d)$ will ensure that only one value of $\theta^{(d)}$ is nonzero.)
It bears superficial resemblance to a mixture of unigrams, where each document is produced by a single topic; but in that model the topic is a latent variable, whereas in your case it is observed. Cf. the mixture of unigrams model as described in the original LDA paper.
There's nothing wrong with bag of words, but it's worth noting that the paper introducing LDA demonstrated better perplexity for LDA than for the mixture of unigrams in two experiments (Figure 9).
To the question about classification, sure: Each bag of words model will give you a likelihood for the document, and you can use this along with a prior on topics to find $p(z|\textbf{w})$ using Bayes rule. (If your prior on topics is uniform, this is equivalent to maximum likelihood.)
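As an illustrative sketch of that Bayes-rule step (all probabilities here are made up), working in log space:

```python
# Classify a document with per-label bag-of-words models via Bayes rule.
import numpy as np

# log p(w | label) for two labels over a 3-word vocabulary (made-up values)
log_p_w = np.log(np.array([[0.7, 0.2, 0.1],
                           [0.1, 0.3, 0.6]]))
log_prior = np.log(np.array([0.5, 0.5]))  # uniform prior on labels

doc = [0, 0, 2]                           # word ids of the document

# log p(z) + sum_j log p(w_j | z), then normalize to get log p(z | w)
log_joint = log_prior + log_p_w[:, doc].sum(axis=1)
log_post = log_joint - np.logaddexp.reduce(log_joint)
print(np.exp(log_post))
```

With a uniform prior, as noted above, the label with the highest posterior is simply the one under which the document is most likely.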
You've only asked whether this is possible, and not about likely performance. But for what it's worth, my intuition is that you'll get better predictive performance with regular LDA and a subsequent classifier. When in doubt, cross validate.