Solved – Convergence of the Independent Metropolis-Hastings algorithm

Tags: markov-chain-monte-carlo, metropolis-hastings, references

I am interested in the convergence properties of the Metropolis-within-Gibbs sampler with independent or random walk proposals. In this paper, I have read that in the case of an independent proposal, the proposal distribution must satisfy an inequality of the form $p(z) \geq \varepsilon \pi(z)$ (where $p$ denotes the proposal distribution and $\pi$ the target distribution) to ensure that the sampler converges to the stationary distribution uniformly ergodically. If the proposal distribution does not satisfy this condition, the sampler can have very poor convergence properties.

However, in the paper mentioned above, the condition is given for an adaptive independent Metropolis-Hastings algorithm. Do you know of a reference that states this result for the classical independent Metropolis-Hastings algorithm? More precisely, how bad can the convergence of the sampler be if the proposal does not satisfy this inequality? And lastly: is there a similar condition for the random walk Metropolis-Hastings algorithm?

Best Answer

A more relevant paper about the convergence of Metropolis-Hastings algorithms is the one by Mengersen and Tweedie (1996) since it is both quite readable and general. In this paper, two major results can be singled out:

  1. The independent Metropolis-Hastings algorithm with target $p$ and proposal $q$ leads to a uniformly ergodic Markov chain when $p/q$ is bounded;
  2. In the case of a target with a non-compact support, the random walk Metropolis-Hastings algorithm cannot produce a uniformly ergodic Markov chain. There exist some conditions under which the Markov chain is geometrically ergodic.
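To make result 1 concrete, here is a minimal sketch of the independent Metropolis-Hastings algorithm (function and variable names are my own, not from the paper). It targets a standard normal $p$ with a Student-$t$ proposal $q$ (3 degrees of freedom): the $t$ has heavier tails than the normal, so $p/q$ is bounded and Mengersen and Tweedie's condition for uniform ergodicity holds. Swapping in a lighter-tailed proposal would break the bound.

```python
import numpy as np

def independent_mh(log_target, log_proposal, sample_proposal, n_iter, rng):
    """Independent Metropolis-Hastings: proposals drawn i.i.d. from q,
    accepted with probability min(1, [p(y)/q(y)] / [p(x)/q(x)])."""
    x = sample_proposal(rng)
    chain = np.empty(n_iter)
    n_accepted = 0
    for i in range(n_iter):
        y = sample_proposal(rng)
        # Log acceptance ratio in terms of the importance weights p/q
        log_alpha = (log_target(y) - log_proposal(y)) \
                  - (log_target(x) - log_proposal(x))
        if np.log(rng.uniform()) < log_alpha:
            x = y
            n_accepted += 1
        chain[i] = x
    return chain, n_accepted / n_iter

# Target p: standard normal (log density up to the exact constant)
log_target = lambda z: -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

# Proposal q: Student-t with nu = 3 df; heavier tails than the normal,
# hence sup p/q < infinity and the chain is uniformly ergodic
nu = 3.0
t_const = np.log(2.0 / (np.pi * np.sqrt(nu)))  # log Gamma(2)/(Gamma(3/2) sqrt(nu pi))
log_proposal = lambda z: t_const - 0.5 * (nu + 1) * np.log1p(z**2 / nu)

rng = np.random.default_rng(0)
chain, rate = independent_mh(
    log_target, log_proposal,
    lambda r: r.standard_t(nu),
    n_iter=20000, rng=rng,
)
```

With the roles reversed (normal proposal, $t$ target), $p/q$ is unbounded and the chain, while still ergodic, can stick at tail values for very long stretches, which is the kind of poor behaviour the question asks about.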

If you want a deeper entry on convergence properties of Metropolis-Hastings algorithms, the series of papers written by Gareth Roberts (Warwick) and Jeff Rosenthal (Toronto) contain a wealth of results.
