Solved – What is causing autocorrelation in MCMC samples?

autocorrelation, bayesian, correlation, jags, markov-chain-montecarlo

When running a Bayesian analysis, one thing to check is the autocorrelation of the MCMC samples. But I don't understand what is causing this autocorrelation.

Here, they say that

High autocorrelation samples [from MCMC] often are caused by strong correlations among variables.

  1. I'm wondering what other causes there are of highly autocorrelated samples in MCMC.

  2. Is there a list of things to check when autocorrelation is observed in JAGS output?

  3. How can we manage autocorrelation in a Bayesian analysis? I know that some say to thin, while others say thinning is bad. Running the model for longer is another solution, but it is costly in time and in some cases still leaves visible patterns in the trace of the MCMC samples. Why are some algorithms much more effective at exploring the space and producing less correlated samples? Should we change the initial values the chain starts from?

Best Answer

When using Markov chain Monte Carlo (MCMC) algorithms in Bayesian analysis, the goal is usually to sample from the posterior distribution. We resort to MCMC when other, independent sampling techniques (like rejection sampling) are not feasible. The problem with MCMC, however, is that the resulting samples are correlated: each new sample is drawn using the current one.
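To make this concrete, here is a minimal sketch (my own Python/NumPy code, with made-up function names, not from the question) of a random-walk Metropolis sampler targeting a standard normal. Because each proposal starts from the current state, and a rejected proposal repeats that state, successive draws are visibly correlated, unlike i.i.d. draws from the same distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def rw_metropolis(log_target, x0, step, n):
    """Random-walk Metropolis: each draw is proposed from the current one."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        prop = x + step * rng.normal()
        # Accept with probability min(1, target(prop) / target(x)).
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        xs[i] = x  # on rejection the chain simply repeats x
    return xs

def lag1_autocorr(v):
    v = v - v.mean()
    return (v[:-1] @ v[1:]) / (v @ v)

log_std_normal = lambda x: -0.5 * x**2  # N(0, 1), up to an additive constant

chain = rw_metropolis(log_std_normal, x0=0.0, step=0.5, n=20_000)
iid = rng.normal(size=20_000)

print(lag1_autocorr(chain))  # clearly positive
print(lag1_autocorr(iid))    # close to zero
```

With a small step size the chain barely moves per iteration, so its lag-1 autocorrelation is high, while the i.i.d. draws show essentially none.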

There are two main MCMC sampling methods: Gibbs sampling and the Metropolis-Hastings (MH) algorithm.
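As a toy illustration of the quoted point that strong correlations among variables drive autocorrelation: a Gibbs sampler for a bivariate normal with correlation ρ updates each coordinate from its exact conditional, and the lag-1 autocorrelation of either coordinate comes out to roughly ρ², so a highly correlated target mixes slowly. A sketch (my own Python, hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_bivariate_normal(rho, n):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each coordinate is drawn from its exact conditional:
    x | y ~ N(rho * y, 1 - rho**2), and symmetrically for y | x.
    """
    xs = np.empty(n)
    x, y = 0.0, 0.0
    s = np.sqrt(1 - rho**2)
    for i in range(n):
        x = rng.normal(rho * y, s)
        y = rng.normal(rho * x, s)
        xs[i] = x
    return xs

def lag1_autocorr(v):
    v = v - v.mean()
    return (v[:-1] @ v[1:]) / (v @ v)

# Stronger correlation among variables -> higher autocorrelation in the chain.
a_low = lag1_autocorr(gibbs_bivariate_normal(0.3, 20_000))
a_high = lag1_autocorr(gibbs_bivariate_normal(0.99, 20_000))
print(a_low, a_high)
```

Note that Gibbs sampling here has no tunable step size at all, which is the point made in item 1 below: the autocorrelation is dictated by the shape of the target.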

  1. Autocorrelation in the samples is affected by many things. For example, when using MH algorithms, you can to some extent reduce or increase the autocorrelation by adjusting the step size of the proposal distribution. In Gibbs sampling, however, no such adjustment is possible. The autocorrelation is also affected by the starting values of the Markov chain: there is generally an (unknown) optimal starting value that leads to comparatively less autocorrelation. Multi-modality of the target distribution can also greatly affect the autocorrelation of the samples. Thus there are attributes of the target distribution that can definitely dictate autocorrelation. But most often autocorrelation is dictated by the sampler used. Broadly speaking, if an MCMC sampler jumps around the state space more, it will probably have smaller autocorrelation. Thus, it is often desirable to choose samplers that make large accepted moves (like Hamiltonian Monte Carlo).
  2. I am unfamiliar with JAGS.
  3. If you have decided on the sampler already, and do not have the option of trying other samplers, then the best bet is to do some preliminary analysis to find good starting values and step sizes. Thinning is generally not suggested, since it is argued that throwing away samples is less efficient than using correlated samples. A universal solution is to run the sampler for a long time, so that your effective sample size (ESS) is large. Look at the R package mcmcse here. In the vignette, on page 8, the author proposes a calculation of the minimum number of effective samples one would need for a given estimation process. You can find that number for your problem, and let the Markov chain run until you have that many effective samples.
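mcmcse is an R package; for intuition, here is a rough Python analogue (all names my own) that estimates ESS as n / (1 + 2·Σρₖ) from the chain's positive autocorrelations, applied to a synthetic AR(1) "chain" standing in for correlated MCMC output. It shows that 50,000 strongly correlated draws carry far fewer effective samples, and that simply running longer buys more of them.

```python
import numpy as np

rng = np.random.default_rng(2)

def ess(x, max_lag=200):
    """Rough effective sample size: n / (1 + 2 * sum of positive autocorrelations).

    A simplified estimator for illustration only; packages such as R's mcmcse
    use more careful batch-means or spectral methods.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = x @ x / n
    tau = 1.0
    for k in range(1, min(max_lag, n - 1)):
        rho = (x[:-k] @ x[k:]) / (n * var)
        if rho <= 0:  # truncate once the autocorrelation has died out
            break
        tau += 2 * rho
    return n / tau

def ar1(phi, n):
    """AR(1) series mimicking correlated MCMC output; phi sets the correlation."""
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

ess_short = ess(ar1(0.9, 50_000))   # far fewer than 50,000 effective samples
ess_long = ess(ar1(0.9, 200_000))   # running longer yields more of them
print(ess_short, ess_long)
```

For an AR(1) chain the integrated autocorrelation time is (1 + φ)/(1 − φ), so at φ = 0.9 each effective sample costs roughly 19 raw draws, which is exactly the "run longer until the ESS is big enough" budget the mcmcse vignette helps you compute.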