[Math] Escape probabilities in a random walk.

markov-chains, markov-process, random-walk, stopping-times

So, a lot of the theory of symmetric random walks seems to concentrate on 'hitting/stopping times' and things like that. So I started wondering…

How would I go about calculating the probability of never returning to a given state (say $0$) in a random walk (starting at $0$, say) where you move $+1$ with probability $p > 1/2$ and $-1$ otherwise?

And then, could this be generalised to a whole interval, say $[-10,10]$?

Notes: I read somewhere on here that the probability of escaping zero with a random walk identical to mine is $2 - \frac{1}{p}$.

Best Answer

The abstract general recipe looks like this. Given a domain $D$, two sets $A,B \subset D$, and a Markov process with generator $L$, the probability of hitting $A$ before hitting $B$, starting from $x$, solves the following boundary-value problem:

$$\begin{aligned} (Lu)(x) &= 0, & \quad &x \not\in A \cup B, \\ u(x) &= 1, & \quad &x \in A, \\ u(x) &= 0, & \quad &x \in B. \end{aligned}$$

As is, this is not the problem that you want to solve, because your problem has no $B$. To get around this, you can solve a sequence of problems in which you "move $B$ to infinity". For your walk on $\mathbb{Z}$, where you are interested in hitting zero, you can take $A=\{ 0 \}$, $L=P-I$ where $P$ is the transition probability matrix, and $B=\{ n \}$ for a large positive integer $n$. (Since the walk is not symmetric, the choice of side matters: the situation is different depending on whether you start to the left or to the right of zero.) The result is a linear system of equations, which for a start on the positive side looks something like

$$u_0 = 1 \\ (1-p)u_0 - u_1 + pu_2 = 0 \\ (1-p)u_1 - u_2 + pu_3 = 0 \\ \dots \\ (1-p)u_{n-2} - u_{n-1} + pu_n = 0 \\ u_n = 0.$$
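As a sanity check, the system above can be assembled and solved numerically. Here is a minimal Python sketch; the values $p = 0.6$ and $n = 50$ are illustrative choices, not from the question, and the solution is compared against the closed form $u_k = \frac{r^k - r^n}{1 - r^n}$ with $r = \frac{1-p}{p}$, which one obtains by solving the recurrence with the two boundary conditions:

```python
# Assemble and solve the hitting-probability system for the biased walk.
# p and n are illustrative choices (any p > 1/2 and moderately large n will do).
p, n = 0.6, 50

# (n+1) x (n+1) system A u = b: u_0 = 1, u_n = 0, and at interior points k
# (1-p) u_{k-1} - u_k + p u_{k+1} = 0, i.e. the generator L = P - I applied to u.
A = [[0.0] * (n + 1) for _ in range(n + 1)]
b = [0.0] * (n + 1)
A[0][0], b[0] = 1.0, 1.0               # boundary condition u_0 = 1
A[n][n], b[n] = 1.0, 0.0               # boundary condition u_n = 0
for k in range(1, n):
    A[k][k - 1], A[k][k], A[k][k + 1] = 1 - p, -1.0, p

# Gauss-Jordan elimination (no pivoting needed: the matrix is weakly
# diagonally dominant, so every pivot stays nonzero).
for i in range(n + 1):
    piv = A[i][i]
    A[i] = [x / piv for x in A[i]]
    b[i] /= piv
    for j in range(n + 1):
        if j != i and A[j][i] != 0.0:
            f = A[j][i]
            A[j] = [x - f * y for x, y in zip(A[j], A[i])]
            b[j] -= f * b[i]
u = b

# Compare with the closed form u_k = (r^k - r^n) / (1 - r^n), r = (1-p)/p.
r = (1 - p) / p
closed = [(r**k - r**n) / (1 - r**n) for k in range(n + 1)]
print(max(abs(uk - ck) for uk, ck in zip(u, closed)))  # round-off sized
```

The computed $u_k$ decays geometrically in the starting point $k$, which is exactly the behaviour one expects from a walk biased away from the target.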

This can be solved by recurrence-relation techniques; for $p \neq 1/2$ the general solution is a combination of exponentials, $u_k = A + B r^k$ with $r = \frac{1-p}{p}$, and fitting the boundary conditions gives $u_k = \frac{r^k - r^n}{1 - r^n}$. Since $p > 1/2$ means $r < 1$, when you send $n \to \infty$ you will find that for all $k>0$, $u_k$ converges to $r^k$, which is strictly less than $1$. In particular, starting from $1$, the probability of never hitting $0$ is $1 - r = 2 - \frac{1}{p}$, the value quoted in the question; starting from $0$ itself and conditioning on the first step, the probability of never returning is $p(1-r) = 2p-1$. On the other hand, if you start to the left of zero, with the walk still biased towards the right, then $u_k$ converges to $1$ instead: the walk hits $0$ almost surely.

Note that a closely related recipe works for finding the expected time to hit $0$ starting from a point $x$: one solves $(Lu)(x) = -1$ away from the target set, with $u = 0$ on it.
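This expected-time recipe also handles the interval part of the question: the expected time for the walk to exit $\{0,\dots,n\}$ solves the interior equations $(1-p)\tau_{k-1} - \tau_k + p\tau_{k+1} = -1$ with $\tau_0 = \tau_n = 0$. A minimal Python sketch, where the function name and the test values are my own choices:

```python
def expected_exit_time(p, n):
    """Expected number of steps for the p-biased walk started at each k
    to leave {0, ..., n} (both endpoints absorbing)."""
    m = n - 1                      # interior unknowns tau_1, ..., tau_{n-1}
    sub = [1 - p] * m              # coefficient of tau_{k-1}
    diag = [-1.0] * m              # coefficient of tau_k
    sup = [p] * m                  # coefficient of tau_{k+1}
    rhs = [-1.0] * m               # (L tau)(k) = -1 at interior points
    # Thomas algorithm: forward elimination on the tridiagonal system...
    for i in range(1, m):
        f = sub[i] / diag[i - 1]
        diag[i] -= f * sup[i - 1]
        rhs[i] -= f * rhs[i - 1]
    # ...then back substitution; tau_0 = tau_n = 0 are the boundary values.
    tau = [0.0] * (n + 1)
    tau[m] = rhs[m - 1] / diag[m - 1]
    for i in range(m - 2, -1, -1):
        tau[i + 1] = (rhs[i] - sup[i] * tau[i + 2]) / diag[i]
    return tau

# Symmetric sanity check: for p = 1/2 the classical answer is tau_k = k(n - k).
tau = expected_exit_time(0.5, 10)
print(tau[3])   # close to 3 * 7 = 21
```

For the biased case one can check the output against the explicit solution $\tau_k = \frac{1}{2p-1}\left(n\,\frac{1-r^k}{1-r^n} - k\right)$, $r = \frac{1-p}{p}$, obtained from the same recurrence with the particular solution $-k/(2p-1)$.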