What are the differences in modeling ability between Variational Auto-encoders (VAEs) and Restricted Boltzmann Machines (RBMs)?
Specifically, I am interested in the differences in their unsupervised learning power.
Where exactly do VAEs work better than RBMs, and vice versa?
Also, what can RBMs capture that VAEs cannot?
I am already aware of this post, and I understand the concepts behind VAEs and RBMs. This other one is very brief and is not what I am looking for.
Does anybody know of any (theoretical) limitations on the modeling power of one that make the other superior?
Best Answer
Theoretically, RBMs are undirected models with no connections among the observed variables or among the latent variables (the graph is bipartite), while VAEs are directed models with continuous latent variables.
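The structural difference can be made concrete with a small sketch, assuming toy sizes and NumPy only: the RBM is specified implicitly through an energy function over binary visible and hidden units, while the VAE is specified as a directed generative process (sample a continuous latent, then decode). All array sizes, the decoder nonlinearity, and the noise scale below are illustrative assumptions, not from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- RBM: undirected, defined only by an energy over binary v (visible)
# and h (hidden); there is no connection within the v layer or within h.
n_visible, n_hidden = 4, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible)   # visible biases
c = np.zeros(n_hidden)    # hidden biases

def rbm_energy(v, h):
    # E(v, h) = -v^T W h - b^T v - c^T h; the joint is p(v, h) ∝ exp(-E).
    return -(v @ W @ h + b @ v + c @ h)

# --- VAE generative side: directed, continuous latent z; sample by
# ancestral sampling: z ~ N(0, I), then x ~ p(x | z).
# Here the decoder is a hypothetical one-layer tanh network.
n_latent = 2
W_dec = rng.normal(scale=0.1, size=(n_latent, n_visible))

def vae_sample():
    z = rng.standard_normal(n_latent)   # sample from the prior
    x_mean = np.tanh(z @ W_dec)         # decoder mean
    return x_mean + 0.1 * rng.standard_normal(n_visible)  # Gaussian noise

v = rng.integers(0, 2, size=n_visible).astype(float)
h = rng.integers(0, 2, size=n_hidden).astype(float)
print(rbm_energy(v, h))       # a scalar energy
print(vae_sample().shape)     # a generated sample of length n_visible
```

Note that sampling the VAE is a single directed pass, whereas drawing an exact sample from the RBM would require MCMC (e.g. Gibbs sampling between the two layers), which is one practical consequence of the undirected/directed distinction.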
The Deep Learning book (Chapter 20, Deep Generative Models) provides a very good summary of the pros and cons of VAEs and all variants of RBMs.
To quote some: