Are Restricted Boltzmann Machines better than stacked autoencoders, and why?

autoencoders, deep-learning, deep-belief-networks, neural-networks, restricted-boltzmann-machine

So I'm learning about deep learning. I first learned about stacked auto-encoders, and now I'm learning about Restricted Boltzmann Machines. However, none of the papers/tutorials I have read explain why one would want to use an RBM instead of an auto-encoder. So what are the advantages of RBMs over stacked auto-encoders? And when should one use RBMs rather than auto-encoders?

Best Answer

Stacked auto-encoders typically feature many hidden layers. This causes problems for the usual backpropagation-style training methods, because the errors propagated backwards shrink from layer to layer and become vanishingly small by the time they reach the first few layers (the vanishing-gradient problem), so those layers barely learn. The sketch below illustrates this.
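
As a rough illustration (not part of the original answer), here is a minimal NumPy sketch of a deep sigmoid auto-encoder; the layer sizes, batch size, and initialization scale are arbitrary assumptions. Printing the per-layer gradient norms after one backward pass shows how small the updates to the earliest layers become:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A deep auto-encoder: 784 -> 256 -> 64 -> 16 -> 64 -> 256 -> 784
# (hypothetical sizes, chosen only for illustration)
sizes = [784, 256, 64, 16, 64, 256, 784]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.random((32, 784))  # a dummy mini-batch

# Forward pass, keeping activations for backpropagation
acts = [x]
for W in weights:
    acts.append(sigmoid(acts[-1] @ W))

# Backpropagate the squared reconstruction error (target = input)
delta = (acts[-1] - x) * acts[-1] * (1 - acts[-1])
grad_norms = []
for i in reversed(range(len(weights))):
    grad_norms.append(np.linalg.norm(acts[i].T @ delta))
    delta = (delta @ weights[i].T) * acts[i] * (1 - acts[i])

# Gradient norms from the output layer back to the first layer:
# each sigmoid derivative is at most 0.25, so they shrink sharply.
for layer, g in zip(reversed(range(len(weights))), grad_norms):
    print(f"layer {layer}: grad norm = {g:.2e}")
```

Running this, the gradient norm for the first layer comes out orders of magnitude smaller than for the output layer, which is exactly why plain backpropagation struggles on deep stacks.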

A solution is pretraining, i.e. choosing initial weights that already approximate the final solution. One pretraining technique treats each pair of adjacent layers as an RBM and trains it with contrastive divergence to obtain a good set of starting weights, which are then fine-tuned with backpropagation. RBMs are useful here because contrastive divergence is a local, layer-wise learning rule: no error signal has to travel through many layers, so it does not suffer from the vanishing gradients that plague backpropagation.
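
A sketch of that idea, assuming binary units and one step of contrastive divergence (CD-1); the layer sizes, learning rate, and epoch count are illustrative assumptions, not from the answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM."""
    # Positive phase: sample hidden units given the data
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to the visibles and up again
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # CD-1 approximates the log-likelihood gradient by the difference
    # of correlations under the data and the one-step reconstruction
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)

# Greedy layer-wise pretraining: train one RBM per pair of layers,
# then feed its hidden probabilities to the next RBM as input.
data = rng.random((500, 784))        # dummy data for the sketch
layer_sizes = [784, 256, 64]         # hypothetical stack
pretrained = []
v = data
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    W = rng.normal(0, 0.01, (n_vis, n_hid))
    b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
    for _ in range(10):              # a handful of epochs for the sketch
        cd1_step(v, W, b_vis, b_hid)
    pretrained.append(W)             # use W to initialize the auto-encoder
    v = sigmoid(v @ W + b_hid)       # activations feed the next RBM
```

Each pretrained weight matrix then initializes one encoder layer (and its transpose the matching decoder layer) of the stacked auto-encoder, and backpropagation only has to fine-tune from that starting point rather than learn from scratch.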