Solved – Show How to Generate a Poisson Random Variable with Parameter $\lambda$ using Metropolis-Hastings

bayesian, markov-process, metropolis-hastings, monte-carlo, stochastic-processes

Additionally: Use a simple symmetric random walk as the proposal distribution.
Source: "Introduction to Stochastic Processes with R" – Robert P. Dobrow, Chapter 5 Exercises: Question 5.6


I know this is typically done in software, but I want to understand the method by hand. I feel very uncomfortable with my grasp of Bayesian statistics:

I have $u \sim \mathrm{Uniform}(0,1)$

My understanding is that the above is my prior and the likelihood is the density function I want to derive samples from.

But how do I create my acceptance function? My professor presented it as a random walk starting at state $k=0$: move to $k=1$ if $u < \lambda$, and from then on propose a move left or right with probability $0.5$ each (he says to just flip a coin: heads, move right; tails, move left).
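To make sure I have the proposal part right, here is how I picture that step in R (just my own sketch, assuming the walk always proposes an upward move when it sits at $0$):

propose <- function(k) {
  if (k == 0) {
    k + 1                       # at 0 the walk can only propose moving up
  } else {
    k + sample(c(-1, 1), 1)     # flip a fair coin: heads right, tails left
  }
}
propose(0)   # always 1
propose(5)   # 4 or 6, each with probability 0.5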

Then move to the proposal state with the following criteria:

Move to $k-1$ if

$$u< \frac{\frac{e^{-\lambda}\lambda^{k-1}}{(k-1)!}}{\frac{e^{-\lambda}\lambda^{k}}{k!}} = \frac{k}{\lambda}$$

A similar criterion applies for moving to $k+1$, where the ratio simplifies to $\frac{\lambda}{k+1}$.
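To convince myself of that algebra numerically, I can compute the ratio directly with R's Poisson pmf (my own check, not part of the exercise):

lambda <- 2.5
k <- 4
dpois(k - 1, lambda) / dpois(k, lambda)   # ratio of target probabilities pi(k-1)/pi(k)
k / lambda                                # simplified form; both print 1.6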

What confuses me is how to tie this into the acceptance probability that is always taught when learning Metropolis-Hastings, namely

$$a(i,j) = \frac{\pi_{j} T_{ji}}{\pi_{i}T_{ij}}$$
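For what it's worth, that general formula can be written as a small R function, and plugging in the Poisson pmf for $\pi$ together with a symmetric $\pm 1$ proposal for $T$ recovers the ratios above (names like pi_fun and T_fun are just my own choices for illustration):

# General acceptance ratio a(i, j) = pi(j) T(j, i) / (pi(i) T(i, j))
accept_ratio <- function(i, j, pi_fun, T_fun) {
  (pi_fun(j) * T_fun(j, i)) / (pi_fun(i) * T_fun(i, j))
}

lambda <- 2.5
pi_fun <- function(k) dpois(k, lambda)     # Poisson target
T_fun  <- function(i, j) 0.5               # symmetric proposal, so T cancels
accept_ratio(4, 3, pi_fun, T_fun)          # k/lambda     = 4/2.5 = 1.6
accept_ratio(4, 5, pi_fun, T_fun)          # lambda/(k+1) = 2.5/5 = 0.5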

Sorry if this question is messy. I've been spending so much time on this and am still struggling to wrap my mind around it. Thank you in advance for your answer! I appreciate all help!

Best Answer

Assume $T_{00}=1-p=1-T_{01}$ and $T_{i,i+1}=p=1-T_{i,i-1}$ for $i\geqslant 1$, where $T$ is the transition matrix of the proposal random walk. Let $\pi_i=\frac{\lambda^i}{i!}e^{-\lambda}$ for $i=0,1,\ldots$. Assume WLOG that $X_0=0$ and define

$$ a(i,j) = \frac{\pi_j T_{ji}}{\pi_i T_{ij}} = \begin{cases} 1,& i=j=0;\\ \frac{ip}{\lambda(1-p)},& j=i-1;\\ \frac{\lambda(1-p)}{(i+1)p},& j=i+1. \end{cases} $$

Given $X_n=i$, draw a proposal $j$ from row $i$ of $T$, let $U_{n+1}\sim \mathsf{Unif}(0,1)$, and set

$$ X_{n+1} = \begin{cases} j,& U_{n+1}\leqslant a(i,j);\\ i,& U_{n+1}>a(i,j). \end{cases} $$

Then $\{X_n :n=0,1,\ldots\}$ is a reversible Markov chain with stationary distribution $\pi$, as desired. Here's some $\texttt{R}$ code I wrote to simulate this:

lambda <- 2.5   # Poisson parameter
p <- 0.45       # probability the proposal walk steps up
N <- 10000      # number of Metropolis-Hastings iterations

X <- numeric(N + 1)   # chain states (X[n+1] is filled on iteration n)
a <- numeric(N)       # acceptance ratios

for(n in 1:N) {
  # Propose: from 0, step up with probability p (otherwise stay);
  # from any other state, step up with probability p, down with probability 1-p.
  if(X[n] == 0) {
    X[n+1] <- X[n] + rbinom(1,1,p)
  }
  else {
    X[n+1] <- X[n] + 2 *(rbinom(1,1,p) - 0.5)
  }

  # Acceptance ratio a(i,j) = pi_j T_ji / (pi_i T_ij)
  if(X[n+1] == X[n])
    a[n] <- 1
  else if(X[n+1] == X[n] - 1)
    a[n] <- p *X[n]/(lambda*(1-p))
  else
    a[n] <- lambda*(1-p)/(X[n+1]*p)

  # Reject the proposal (stay at X[n]) if U > a(i,j)
  if(a[n] < runif(1,0,1)) {
    X[n+1] <- X[n]
  }
}

# Compare the empirical CDF of the chain with the Poisson(lambda) CDF
plot(ecdf(X), lwd=2, col="blue")
lines(0:9, ppois(0:9,lambda), col="red", type="s", lwd=2, lty=3)
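As a quick sanity check on the simulation output (my own addition, not part of the original answer), the empirical frequencies after a burn-in can be compared with the Poisson pmf:

burn <- 1000                                            # discard early iterations as burn-in
Xs <- X[(burn + 1):length(X)]
emp <- as.numeric(table(factor(Xs, levels = 0:9))) / length(Xs)
round(rbind(empirical = emp, poisson = dpois(0:9, lambda)), 3)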