I have been trying to learn MCMC methods and have come across Metropolis-Hastings, Gibbs, importance, and rejection sampling. Some of the differences are obvious, e.g., Gibbs sampling is a special case of Metropolis-Hastings when we sample from the full conditionals, but others are less so, such as when we would want to use Metropolis-Hastings within a Gibbs sampler. Does anyone have a simple way to see the bulk of the differences between these methods?
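To make the Gibbs-as-special-case point concrete, here is a minimal sketch of my understanding (the bivariate normal setup and all names are my own choices for illustration): both full conditionals are normal and available in closed form, so each Gibbs move is a Metropolis-Hastings move that is accepted with probability 1.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, seed=0):
    """Gibbs sampler for (X, Y) ~ N(0, [[1, rho], [rho, 1]]).

    The full conditionals are themselves normal:
        X | Y = y ~ N(rho * y, 1 - rho**2)
        Y | X = x ~ N(rho * x, 1 - rho**2)
    so every move is a Metropolis-Hastings proposal whose acceptance
    ratio equals 1 (it is always accepted).
    """
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1 - rho**2)
    x, y = 0.0, 0.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)  # draw from X | Y = y
        y = rng.normal(rho * x, sd)  # draw from Y | X = x
        draws[t] = x, y
    return draws

samples = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(samples.T)[0, 1])  # sample correlation close to 0.8
```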
The difference between Metropolis-Hastings, Gibbs, importance, and rejection sampling
Best Answer
As detailed in our book with George Casella, *Monte Carlo Statistical Methods*, these methods are used to produce samples from a given distribution, with density $f$ say, either to get an idea about this distribution, or to solve an integration or optimisation problem related to $f$. For instance, to find the value of $$\int_{\mathcal{X}} h(x) f(x)\,\text{d}x\,,\qquad h(\mathcal{X})\subset \mathbb{R}\,,$$ the mode of the distribution of $h(X)$ when $X\sim f(x)$, or a quantile of this distribution.
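A minimal sketch of the plain Monte Carlo estimator of such an integral (the particular choices of $f$ and $h$ below are mine, purely for illustration): draw $X_1,\dots,X_n\sim f$ and average the $h(X_i)$.

```python
import numpy as np

# Plain Monte Carlo: estimate I = ∫ h(x) f(x) dx by averaging h(X_i), X_i ~ f.
# Toy choices for illustration: f = standard normal density, h(x) = x**2,
# so the true value is E[X^2] = Var(X) = 1.
rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)                 # X_i ~ f
h = x**2
estimate = h.mean()                        # law of large numbers: -> I = 1
std_error = h.std(ddof=1) / np.sqrt(n)     # CLT-based standard error
print(estimate, std_error)                 # estimate near 1
```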
Comparing the Monte Carlo and Markov chain Monte Carlo methods you mention on relevant criteria requires setting out the background of the problem and the goals of the simulation experiment, since the pros and cons of each will vary from case to case.
Here are a few generic remarks that most certainly do not cover the complexity of the issue:
In conclusion, a warning: there is no such thing as an optimal simulation method. Even in a specific setting like approximating an integral $$\mathcal{I}=\int_{\mathcal{X}} h(x) f(x)\,\text{d}x\,,$$ the costs of designing and running the different methods intrude so as to make a global comparison very delicate, if possible at all, while, from a formal point of view, they can never beat the zero-variance answer of returning the constant "estimate" $$\hat{\mathcal{I}}=\int_{\mathcal{X}} h(x) f(x)\,\text{d}x\,.$$ For instance, simulating from $f$ itself is very rarely, if ever, the best option. This does not mean that the methods cannot be compared, only that there is always room for an improvement, which comes with additional costs.
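As a toy illustration of that last point (the example is mine, not from the answer above): to estimate the tail probability $I = P(X > 4)$ for $X\sim\mathcal{N}(0,1)$, i.e. $h(x)=\mathbb{1}\{x>4\}$, sampling from $f$ itself is far from the best option, while an importance sampling proposal concentrated on the tail does much better.

```python
import numpy as np

# Goal: I = P(X > 4) for X ~ N(0, 1), i.e. h(x) = 1{x > 4}.
# True value: about 3.167e-5.
rng = np.random.default_rng(2)
n = 100_000

# Naive estimator, drawing from f itself: only a handful of the n draws
# ever exceed 4, so the relative error is enormous.
x = rng.standard_normal(n)
naive = np.mean(x > 4)

# Importance sampling with g = shifted Exp(1) on (4, inf),
# g(y) = exp(-(y - 4)); the estimator averages the weights f(Y_i)/g(Y_i),
# which is unbiased for I and has tiny variance here.
y = 4.0 + rng.exponential(1.0, n)
f_over_g = np.exp(-0.5 * y**2) / np.sqrt(2 * np.pi) / np.exp(-(y - 4.0))
is_estimate = f_over_g.mean()
print(naive, is_estimate)   # is_estimate is close to 3.167e-5
```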