Metropolis-Hastings Algorithm on a Markov Chain

graph-theory, markov-chains, markov-process

Consider a finite, undirected, connected graph $G = (V,E)$ with vertex set $V$ and edge set $E$. We do not know the number $|V|$ of vertices of $G$, nor its structure; we can only see local information about $G$. E.g., if we are at a vertex $x \in V$ we can see the neighbors of $x$, i.e. the vertices $y \in V$ for which $(x,y) \in E$, as well as how many neighbors $x$'s neighbors have. Let us denote by $d(x)$, and call the degree of $x \in V$, the number of neighbors of $x$.

1) Compute the transition probabilities of the chain $\{X_n\}_{n \in \mathbb{N}}$ of the Metropolis-Hastings algorithm simulating the uniform distribution on $V$, using those of the random walk on $G$ as the proposal transition probabilities.

2) Show that, if there are two vertices with different degrees, then the chain is aperiodic and

$$\lim_{n\to\infty}\mathbb{P}[X_n=x] =\frac{1}{|V|}$$

for each $x \in V$ and any initial distribution (therefore, in the long run, the chain is equally likely to be found at any vertex).

My effort :

Let $\{X_n\}_n$ be a walk on the vertices $V$ of this connected, undirected graph $G$. At each step, the walker selects uniformly at random one of the edges incident to the vertex it is on and moves to the other endpoint of that edge.
If the walker is at a vertex $x \in V$ of degree $d(x)$, then all $d(x)$ vertices connected to $x$ are equally likely as the next destination. Therefore, if $E$ is the set of edges of the graph, the transition probabilities of the chain are given by:
$$r(x,y) =
\begin{cases}
\frac{1}{d(x)}, & \text{if } (x,y) \in E, \\
0, & \text{if } (x,y) \notin E.
\end{cases}$$
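As a quick sanity check (my own sketch, with a made-up example graph), these random-walk transition probabilities can be computed from an adjacency list:

```python
# Hypothetical undirected example graph as an adjacency list (made up for illustration).
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

def degree(x):
    # d(x): number of neighbors of x
    return len(graph[x])

def r(x, y):
    # Random-walk transition probability: 1/d(x) if (x, y) is an edge, else 0.
    return 1.0 / degree(x) if y in graph[x] else 0.0

# Each row of the transition matrix sums to 1, as it should.
for x in graph:
    assert abs(sum(r(x, y) for y in graph) - 1.0) < 1e-12
```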

Since the target distribution is the uniform one on $V$, we have $\pi(x) = \frac{1}{|V|} > 0$ for every $x \in V$. Consider the random walk on $V$ with transition probabilities $r(x, y)$ from vertex $x$ to vertex $y$. If at some point we are at the vertex $x \in V$, we choose the candidate (proposal) next vertex $y$ with probability $r(x,y)$. Then we decide whether to move to $y$, accepting with probability equal to
$$\frac{\pi(y)r(y, x)}{\pi(x)r(x, y)} \wedge 1 = \min \left\{ \frac{\pi(y)r(y, x)}{\pi(x)r(x, y)} , 1\right\}$$

The probability of a transition from $x$ to $y \neq x$ is the product of the probability of choosing $y$ as the candidate next vertex and the probability of accepting the move, given that $y$ was chosen. Specifically:

$$p(x,y) = r(x,y)\left(\frac{\pi(y)r(y, x)}{\pi(x)r(x, y)} \wedge 1 \right), \quad y \neq x.$$
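To get a feel for the acceptance factor with the uniform target (where the $\pi$'s cancel), here is a small numeric sketch of my own; the degrees are made up:

```python
# For the uniform target pi(x) = 1/n and random-walk proposal r(x, y) = 1/d(x),
# the ratio pi(y) r(y, x) / (pi(x) r(x, y)) simplifies to d(x)/d(y).
def accept_prob(dx, dy, n):
    # Acceptance probability for a proposed move x -> y,
    # given degrees d(x) = dx, d(y) = dy and |V| = n.
    pi = 1.0 / n                                   # uniform target; cancels in the ratio
    ratio = (pi * (1.0 / dy)) / (pi * (1.0 / dx))  # = dx/dy
    return min(ratio, 1.0)

# Hypothetical degrees: moves toward higher-degree vertices are thinned.
assert abs(accept_prob(2, 4, 10) - 0.5) < 1e-12   # d(x)=2, d(y)=4: accept w.p. 1/2
assert accept_prob(4, 2, 10) == 1.0               # toward lower degree: always accept
```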

How can I continue from here?

About the second question: for the Markov chain to be aperiodic, its period must be 1. But the fact that these two vertices have different degrees confuses me. One thought is that there are two vertices $A$ and $B$, where $B$ has a self-loop, as in the picture below:

[Figure: two vertices $A$ and $B$ joined by an edge, with a self-loop at $B$.]

For a vertex $x \in V$ we define the set of all possible return times to $x$:

$$R(x) = \{n \in \mathbb{N} : p^{(n)}(x,x)>0 \}$$

where $$p^{(n)}(x,x)=\mathbb{P}[X_n=x|X_0=x].$$

The greatest common divisor of the set $R(x)$ is the period of the vertex $x$, which we write as $\psi(x)$. If $\psi(x)=1$, the vertex is aperiodic.

For the picture above (my thought), the transition probability matrix is:

$$P = \begin{pmatrix}
0 & 1 \\
1/2 & 1/2
\end{pmatrix}$$

So we have two vertices with different degrees. The chain is irreducible, and because $p(x,x) > 0$ for some vertex $x$, the chain is aperiodic.

About the second question, I cannot show that $\lim_{n\to\infty}\mathbb{P}[X_n=x] =\frac{1}{|V|}$: since the chain is irreducible and aperiodic, its stationary distribution is the limiting distribution, but here that distribution is not $\frac{1}{|V|} = \frac{1}{2}$; it is $\pi = \left(\frac{1}{3},\frac{2}{3}\right)$.
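As a numeric sketch of my attempt: applying the acceptance probability from my formula above to this two-state example, the plain random walk indeed has stationary distribution $(1/3, 2/3)$, while the corrected chain comes out uniform:

```python
# Two-state example from my picture: vertex A (degree 1) and vertex B
# (degree 2, counting its self-loop once).  Random-walk transition matrix:
RW = [[0.0, 1.0],
      [0.5, 0.5]]
deg = [1, 2]

# Metropolis-Hastings correction for the uniform target:
# accept a proposed move x -> y with probability min(d(x)/d(y), 1).
MH = [[0.0, 0.0], [0.0, 0.0]]
for x in range(2):
    for y in range(2):
        if x != y and RW[x][y] > 0:
            MH[x][y] = RW[x][y] * min(deg[x] / deg[y], 1.0)
    MH[x][x] = 1.0 - sum(MH[x][y] for y in range(2) if y != x)

def stationary(P, steps=200):
    # Iterate an initial distribution until it (numerically) converges.
    mu = [1.0, 0.0]
    for _ in range(steps):
        mu = [sum(mu[x] * P[x][y] for x in range(2)) for y in range(2)]
    return mu

print(stationary(RW))  # approximately [1/3, 2/3]
print(stationary(MH))  # approximately [1/2, 1/2]
```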

Any help ?

Best Answer

I like the way this material is presented in Dobrow's book:

Dobrow, R. P. (2016). Introduction to stochastic processes with R. John Wiley & Sons.

  1. Choose a candidate new state according to the proposal probability $p(x,y)$.

  2. Decide whether to accept or not according to $a(x,y)$,

$$a(x,y)=\frac{\pi(y) p(y,x)}{\pi(x) p(x,y)} = \frac{d(x)}{d(y)},$$ since $\pi$ is uniform (so the $\pi$'s cancel) and the proposal is the random walk, $p(x,y)=1/d(x)$.

If $a(x,y) \geq 1$ the new state is always accepted; if $a(x,y) < 1$ it is accepted with probability $a(x,y)$.

Intuitively, if the degree of the current state is smaller than the degree of the proposed state, you must reject some candidate transitions; otherwise you would not get uniform steady-state probabilities across all states.

$$X_{n+1}= \begin{cases} y, & \textrm{if } U < a(x,y), \\ x, & \textrm{otherwise,} \end{cases}$$

where $U \sim \mathrm{Unif}(0,1)$ is drawn independently at each step.

The sequence $X_0, X_1, \ldots$ is a reversible Markov chain whose stationary probability is uniform.

  1. Compute the transition probabilities of the chain $\{X_n\}$, $n \in \mathbb{N}$, of the Metropolis-Hastings algorithm.

For $x \neq y$,

$$P_{xy} = \begin{cases} p(x,y) d(x) /d(y)= 1/d(y), & \textrm{if } \pi(y) p(y,x) \leq \pi(x) p(x,y), \\ p(x,y),& \textrm{otherwise} \end{cases} $$ i.e.,

$$P_{xy} = \begin{cases} 1/d(y), & \textrm{if } d(x) \leq d(y), \\ 1/d(x),& \textrm{otherwise} \end{cases} $$
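These transition probabilities can be checked numerically. A sketch (the graph is an arbitrary example with unequal degrees, not from the book): for neighbors $x, y$ the two cases combine into $P_{xy} = 1/\max\{d(x), d(y)\}$, and the uniform distribution is stationary.

```python
# Arbitrary example graph (made up for illustration) with unequal degrees.
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
n = len(graph)
d = {x: len(graph[x]) for x in graph}

# Off-diagonal entries: P_xy = 1/max(d(x), d(y)) for neighbors x, y;
# the diagonal collects the mass of rejected proposals.
P = {x: {y: 0.0 for y in graph} for x in graph}
for x in graph:
    for y in graph[x]:
        P[x][y] = 1.0 / max(d[x], d[y])
    P[x][x] = 1.0 - sum(P[x][y] for y in graph if y != x)

# With pi uniform, detailed balance pi(x) P_xy = pi(y) P_yx reduces to symmetry of P.
for x in graph:
    for y in graph:
        assert abs(P[x][y] - P[y][x]) < 1e-12

# Hence pi = (1/n, ..., 1/n) is stationary: sum_x pi(x) P_xy = pi(y).
for y in graph:
    assert abs(sum(P[x][y] / n for x in graph) - 1.0 / n) < 1e-12
```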

  2. In steady state, it is equally likely to find the chain at any vertex. Indeed, to show that the detailed balance equations hold: if $\pi(y) p(y,x) \leq \pi(x) p(x,y)$, then

$$\pi(x) P_{xy} = \pi(x)/d(y) = \pi(y) p(y,x)= \pi(y) P_{yx} $$

as desired. The other case follows similarly.
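The limiting behavior can also be checked by simulation. The book's examples use R; the sketch below is Python, on an arbitrary example graph assumed for illustration:

```python
import random

random.seed(0)  # reproducibility

# Arbitrary example graph (made up for illustration) with unequal degrees.
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
d = {x: len(graph[x]) for x in graph}

def mh_step(x):
    # One Metropolis-Hastings step: propose a uniformly random neighbor y,
    # accept with probability min(d(x)/d(y), 1), otherwise stay at x.
    y = random.choice(graph[x])
    return y if random.random() < min(d[x] / d[y], 1.0) else x

steps = 200_000
counts = {v: 0 for v in graph}
x = 0
for _ in range(steps):
    x = mh_step(x)
    counts[x] += 1

freqs = {v: c / steps for v, c in counts.items()}
print(freqs)  # each frequency should be close to 1/|V| = 1/4
```

Despite vertex 0 having three times the degree of vertex 3, the empirical visit frequencies come out approximately equal, as the detailed balance argument predicts.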

