Solved – Gibbs sampling versus general MH-MCMC

bayesian, gibbs, markov-chain-montecarlo, metropolis-hastings, sampling

I have just been doing some reading on Gibbs sampling and the Metropolis-Hastings (MH) algorithm, and I have a couple of questions.

As I understand it, in Gibbs sampling for a large multivariate problem we sample each variable from its full conditional distribution, i.e. we update one variable at a time while keeping all the others fixed, whereas in MH we propose a new value for the state from a proposal distribution and then accept or reject that proposal.
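The coordinate-wise updating described above can be sketched as follows. This is a minimal, hypothetical example using a bivariate normal target with correlation `rho`, chosen because its full conditionals are known in closed form: x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y | x.

```python
import numpy as np

# Minimal Gibbs sampler sketch for a bivariate normal with correlation rho.
# The target here is a hypothetical choice whose full conditionals are known:
#   x | y ~ N(rho * y, 1 - rho^2),  y | x ~ N(rho * x, 1 - rho^2).
rng = np.random.default_rng(0)
rho = 0.8
n_samples = 5000

x, y = 0.0, 0.0
samples = np.empty((n_samples, 2))
for i in range(n_samples):
    # Update one variable at a time from its full conditional,
    # holding the other fixed; every draw is accepted.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples[i] = (x, y)

# After discarding burn-in, the empirical correlation should be near rho.
print(np.corrcoef(samples[1000:].T)[0, 1])
```

Note that there is no accept/reject step anywhere in the loop, which is exactly the "acceptance rate is always 1" property of Gibbs sampling.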

One thing the document said was that the proposed sample is always accepted in Gibbs sampling, i.e. the acceptance rate is always 1. To me this seems like a big advantage, since for large multivariate problems the rejection rate of the MH algorithm can become quite high. If that is indeed the case, what is the reason for not using the Gibbs sampler all the time to generate the posterior distribution?

Best Answer

The main rationale behind using the Metropolis-Hastings algorithm lies in the fact that you can use it even when you only know the posterior up to a normalizing constant. For Gibbs sampling, in contrast, you have to know the full conditional distributions and be able to draw variates from them.
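The point above can be illustrated with a short sketch: random-walk Metropolis needs only an unnormalized (log-)density, because the unknown normalizing constant cancels in the acceptance ratio. The target here, exp(-x^4 / 4), is a hypothetical non-standard "posterior" with no named conditional distributions to exploit.

```python
import numpy as np

# Random-walk Metropolis sketch: only an UNNORMALIZED log-density is needed.
# Hypothetical target proportional to exp(-x^4 / 4); we never compute
# its normalizing constant.
def log_unnorm(x):
    return -x**4 / 4.0

rng = np.random.default_rng(1)
x = 0.0
accepted = 0
chain = np.empty(20000)
for i in range(chain.size):
    prop = x + rng.normal(0.0, 1.0)  # symmetric random-walk proposal
    # Accept with probability min(1, p(prop)/p(x)); the constant cancels.
    if np.log(rng.uniform()) < log_unnorm(prop) - log_unnorm(x):
        x = prop
        accepted += 1
    chain[i] = x

# Unlike Gibbs sampling, the acceptance rate is strictly below 1.
print(accepted / chain.size)
```

The trade-off the answer describes is visible here: MH rejects a fraction of its proposals, but it only ever evaluates `log_unnorm`, whereas a Gibbs sampler for this target would require conditionals we cannot write down in a form we can sample from directly.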