When increasing the number of latent topics k in LDA (latent Dirichlet allocation), how should perplexity behave:
- On the training set?
- On the testing set?
Best Answer
The original LDA paper (Blei, Ng, and Jordan, 2003) gives some insight into this. On the training set, perplexity should decrease monotonically as k increases: a model with more topics has more free parameters and can always fit the training data at least as well. On the test set, perplexity typically decreases at first, reaches a minimum at some k, and then rises again once the model starts to overfit.

[Figure from the paper: held-out perplexity as a function of the number of topics.]
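For concreteness, perplexity is the exponential of the negative average per-word log-likelihood of the held-out text. A minimal sketch of that definition (the helper name `perplexity` and the toy numbers are illustrative, not from the question):

```python
import numpy as np

def perplexity(word_log_probs):
    """Perplexity = exp(-(1/N) * sum_n log p(w_n)),
    given the model's log-probability of each held-out word."""
    word_log_probs = np.asarray(word_log_probs, dtype=float)
    return float(np.exp(-word_log_probs.mean()))

# Sanity check: a model that assigns every word probability 1/V
# over a V-word vocabulary has perplexity exactly V.
V = 1000
uniform = np.full(50, np.log(1.0 / V))
print(perplexity(uniform))  # ≈ 1000
```

Lower perplexity means the model assigns higher probability to the held-out words, which is why the test-set curve over k is used to pick the number of topics.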