Calculating return time / showing null recurrence of discrete Markov chain

Tags: combinatorics, markov-chains, markov-process, probability, stochastic-processes

Recently I have been attempting a question on continuous-time Markov chains, and one of the parts asks me to verify the null recurrence of a CTMC's jump chain, which is a discrete-time Markov chain.

The jump chain $Y_n$'s transition probabilities are:

$\pi_{i, i+1} = \frac{1}{2} = \pi_{i, i-1}$ for $i \geq 1$, and $\pi_{0,1} = 1$,

i.e. it is a 1-dimensional random walk with a reflecting boundary at 0.

Now to verify the null recurrence of the chain I think we need to show:

a) $Y_n$ is recurrent – since $Y_n$ is irreducible I'd want to show the recurrence of state $0$,

b) the expected return time to $0$ is infinite.
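
In symbols, writing $\tau_0^+ = \inf\{n \geq 1 \mid Y_n = 0\}$ for the first return time to $0$ (my notation), I believe a) and b) together amount to:

$$ \mathbb P_0(\tau_0^+ < \infty) = 1 \quad \text{and} \quad \mathbb E_0(\tau_0^+) = \infty. $$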

In order to establish the recurrence of $0$, I was trying to show that the sum $\sum_{n=1}^{\infty} p_{00}^{(n)}$ diverges (which is equivalent to recurrence), and for that I need the $p_{00}^{(n)}$ individually.

Now only $p_{00}^{(2n)} \neq 0$, as we must take the same number of steps to the left and to the right. My idea so far for calculating $p_{00}^{(2n)}$ is to use the reflection principle about $0$ on the random walk without the reflecting boundary – however, that has not worked out well so far, as I could not find a bijection between the paths that use the negative states and the paths that do not.

I'm wondering if anyone could provide some pointers? I have a feeling there may be something wrong with my approach thus far.

Best Answer

First, I would suggest that you don't use this 'reflected' chain, but just use a symmetric SRW on $\mathbb Z$. Note that the two chains have the same return time to $0$, even in distribution: the reflected chain is exactly the absolute value of the SRW, so it doesn't matter whether the walk goes into the negative or the positive side.
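
This also unblocks part (a): for the SRW on $\mathbb Z$ there is no boundary to fight with, and a path of length $2n$ from $0$ to $0$ simply chooses which $n$ of its steps go right. A sketch, using Stirling's approximation $\binom{2n}{n} \sim 4^n / \sqrt{\pi n}$:

$$ p_{00}^{(2n)} = \binom{2n}{n} \Bigl(\tfrac12\Bigr)^{2n} \sim \frac{1}{\sqrt{\pi n}}, \qquad \text{so} \qquad \sum_{n=1}^{\infty} p_{00}^{(2n)} = \infty, $$

and the divergence of this sum is exactly the recurrence of $0$.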

Even without this, though, we want to do a (strong) Markov property argument, conditioning on the first step. Write $$\tau_j = \inf\{k \ge 0 \mid X_k = j\}.$$ For the expected return time to $0$: since the first step from $0$ must be to $1$ (or, for the SRW, to $\pm 1$, which is the same by symmetry), the expected return time is $1 + \mathbb E_1(\tau_0)$, so we want to show that $\mathbb E_1(\tau_0) = \infty$.

Let's see what happens after the first step (this is very often a good tactic): $$ \mathbb E_1(\tau_0) = \tfrac12 \mathbb E_0(\tau_0) + \tfrac12 \mathbb E_2(\tau_0) + 1. $$ Now, clearly $\mathbb E_0(\tau_0) = 0$, since $\tau_0$ allows $k = 0$ and there is no need to move anywhere.

Here comes another very useful realisation: to go from $2$ to $0$, we must go through $1$, and so, applying the strong Markov property at the time we hit $1$, we see that $\mathbb E_2(\tau_0) = 2 \cdot \mathbb E_1(\tau_0)$. The intuition for this is clear: the time it takes to go from $2$ to $1$ is the same in expectation (in distribution, even) as the time from $1$ to $0$; the strong Markov property, together with the translation invariance of the SRW, makes this rigorous. Hence our equation becomes $$ \mathbb E_1(\tau_0) = \mathbb E_1(\tau_0) + 1. $$ But this clearly has no (finite) solution, and so we deduce that $\mathbb E_1(\tau_0) = \infty$.
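
To close the loop (this last step is implicit above): by the first-step decomposition, the expected return time to $0$ is

$$ \mathbb E_0\bigl(\inf\{k \ge 1 \mid X_k = 0\}\bigr) = 1 + \mathbb E_1(\tau_0) = \infty, $$

which, together with the recurrence from part (a), is precisely the null recurrence of $0$, and, by irreducibility, of the whole chain.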