Consider a random walk with absorbing barriers at $0$ and $3$. $\mathbb P(S_{n+1}-S_n=1)=0.6$ and $\mathbb P(S_{n+1}-S_n=-1)=0.4$. What is the probability of eventual absorption at $0$, given that the starting point is $S_0=1$?
[Math] Random walk with absorbing barriers
markov chains · probability · random walk
Related Solutions
Your question is equivalent to asking: what is the probability that the walk starts at $k$ and visits $N$ before visiting $0$?
This question can be approached, for example, using difference equations. To this end, denote by $X_t$ the value of the process at time $t$ and by $T_x$ the number of steps to hit $x$ for the first time. We are looking for the probability $\mathbb{P}(T_0 > T_N \ \vert X_0 = k) = \mathbb{P}_k(T_0 > T_N) = u_k$.
Firstly, we can condition on the first step that the process makes:
$\mathbb{P}_k(T_0 > T_N) = \mathbb{P}_k(T_0 > T_N \ \vert \ X_1 = k+1)\mathbb{P}(X_1 = k+1) + \mathbb{P}_k(T_0 > T_N \ \vert \ X_1 = k-1)\mathbb{P}(X_1 = k-1)$
Note that in our example $\mathbb{P}(X_1 = k-1) = \mathbb{P}(X_1 = k+1) = \frac{1}{2}$. Moreover, we can use the Markov property to write: $\mathbb{P}_k(T_0 > T_N \ \vert \ X_1 = k+1) = \mathbb{P}_{k+1}(T_0 > T_N) = u_{k+1}$. Using those observations we can rewrite the first line as the following recursion: \begin{align} u_k = \frac{1}{2}u_{k+1} + \frac{1}{2}u_{k-1} \end{align}
The boundary conditions are: $u_N = 1$ and $u_0 = 0$. The characteristic equation for this recursion is: $\lambda = \frac{1}{2}\lambda^2 + \frac{1}{2}$, which gives the double root $\lambda = 1$. Therefore the general solution is given by \begin{align} u_k = A+Bk \end{align} Using the boundary conditions we get $A = 0$ and $B = \frac{1}{N}$, which gives $u_{k} = \frac{k}{N}$.
A note about your solution
Firstly, a sanity check: if we set $k=0$, does the formula give probability zero? With your formula it does not, so the answer cannot be correct.
What went wrong? As Tigran pointed out, your calculation assumes that the walk goes straight to $N$, i.e. that it reaches $N$ in exactly $N-k$ steps. However, at any time before the walk is absorbed it may also step left, back towards zero.
Modeling the process
Define $\mathcal{S}$ as the set of possible values for the Markov chain: $$\mathcal{S} = \{0, 0.5, 1, 1.5, \dots, 9.5, 10\}$$ Note that $S_0=5$ and $S_n \in \mathcal{S}$ for all $n \in \{0, 1, 2, \dots\}$. We have $$S_{n+1} = S_n + A_n \quad \forall n \in \{0, 1, 2, \dots\} $$ where $$ A_n = \left\{ \begin{array}{ll} (1/2)B_n &\mbox{ if $S_n \notin \{0, 10\}$} \\ 0 & \mbox{ otherwise} \end{array} \right.$$ where $\{B_n\}_{n=0}^{\infty}$ is an i.i.d. sequence with $P[B_n=1]=P[B_n=-1]=1/2$. Then $$\boxed{E[A_n|S_n=s] = 0 \quad \forall s \in \mathcal{S}} \quad (\text{Eq. } 1) $$
Mean
So for each $n \in \{0, 1, 2, ...\}$ we have \begin{align} E[S_{n+1}] &\overset{(a)}{=} \sum_{s \in \mathcal{S}}E[S_{n+1}|S_n=s]P[S_n=s] \\ &\overset{(b)}{=} \sum_{s \in \mathcal{S}}E[S_n + A_n|S_n=s]P[S_n=s] \\ &= \sum_{s \in \mathcal{S}}E[s + A_n|S_n=s]P[S_n=s] \\ &= \sum_{s \in \mathcal{S}}(s + E[A_n|S_n=s])P[S_n=s] \\ &\overset{(c)}{=} \sum_{s \in \mathcal{S}}sP[S_n=s] \\ &\overset{(d)}{=} E[S_n] \end{align} where (a) holds by the law of total expectation; (b) holds by the fact $S_{n+1}=S_n+A_n$; (c) holds by Eq. (1); (d) holds by definition of expectation. Since $E[S_0]=5$ we conclude: $$\boxed{E[S_n]=5 \quad \forall n \in \{0, 1, 2, … \}}$$
Limiting variance
We know $E[S_n]=5$ for all $n$ and so $$Var(S_n) = E[(S_n-5)^2] = \sum_{s \in \mathcal{S}}(s-5)^2P[S_n=s] $$ Since the process is equally likely to end up at state $0$ or $10$ we have \begin{align} \lim_{n\rightarrow\infty} P[S_n=0] &= 1/2\\ \lim_{n\rightarrow\infty} P[S_n=10] &= 1/2\\ \lim_{n\rightarrow\infty} P[S_n=s] &= 0 \quad \forall s \notin \{0, 10\} \end{align} so $$ \boxed{\lim_{n\rightarrow\infty} Var(S_n) = (0-5)^2(1/2) + (10-5)^2(1/2) = 25} $$
Details on variance
Squaring the equation $S_{n+1} = S_n + A_n$ gives $$S_{n+1}^2 = (S_n+A_n)^2 = S_n^2 + 2S_nA_n + A_n^2 $$ So $$E[S_{n+1}^2|S_n] = S_n^2 + 2S_nE[A_n|S_n] + E[A_n^2|S_n] = S_n^2 + 0 + (1/4)1_{\{S_n \notin\{0, 10\}\}}$$ where $1_{\{S_n \notin\{0, 10\}\}}$ is an indicator function that is 1 if $S_n \notin \{0,10\}$ and is 0 else. So $$E[S_{n+1}^2] = E[S_n^2] + (1/4)P[S_n \notin \{0,10\}]$$ Subtracting 25 from both sides gives $$ Var(S_{n+1}) = Var(S_n) + (1/4)P[S_n \notin \{0,10\}]$$ and $Var(S_0)=0$ so $$ \boxed{Var(S_n) = (1/4)\sum_{i=0}^{n-1} P[S_i \notin \{0,10\}] \quad \forall n\in \{1, 2, 3, ...\} } $$ Since $P[S_i \notin \{0,10\}] = 1$ for $i \in \{0, 1, 2, 3, ..., 9\}$ we have $$\boxed{Var(S_1)=1/4, Var(S_2)=2/4, Var(S_3) = 3/4, ..., Var(S_{10})= 10/4}$$ On the other hand: $$ Var(S_{11}) = 10/4 + (1/4)\underbrace{(1-2(1/2)^{10})}_{P[S_{10}\notin\{0,10\}]}$$ In general, the variance increases as $n\rightarrow\infty$ to approach a limiting value of $25$. It is possible to compute $P[S_i \notin \{0,10\}]$ for all $i$ (for example, by taking powers of a transition probability matrix), but this calculation is more involved.
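The same transition-matrix powers mentioned above can be used to compute $Var(S_n)$ exactly; a short NumPy sketch confirms $Var(S_{10}) = 10/4$ and the approach to the limiting value $25$:

```python
import numpy as np

# Exact Var(S_n) on S = {0, 0.5, ..., 10} with absorbing barriers 0 and 10.
states = np.arange(0, 10.5, 0.5)
m = len(states)
P = np.zeros((m, m))
P[0, 0] = P[m - 1, m - 1] = 1.0    # absorbing barriers
for i in range(1, m - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
dist = np.zeros(m)
dist[10] = 1.0                     # S_0 = 5
variances = []
for n in range(2001):
    variances.append(dist @ (states - 5.0) ** 2)  # Var(S_n) = E[(S_n-5)^2]
    dist = dist @ P
print(variances[10])    # 2.5, i.e. 10/4
print(variances[2000])  # close to the limiting value 25
```

Plotting `variances` shows the linear growth by $1/4$ per step up to $n=10$, then a slowing increase toward $25$ as probability mass collects at the barriers.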
Best Answer
First, form the transition matrix corresponding to the random walk with 0 and 3 as absorbing states: $$P=\begin{bmatrix} 1&0&0&0\\2/5&0&3/5&0\\0&2/5&0&3/5\\0&0&0&1 \end{bmatrix}.$$ Then, rearrange to have the absorbing states first: $$\tilde{P}=\left[ \begin{array}{c|c} I & 0 \\ \hline S & Q \end{array} \right]=\begin{bmatrix} 1&0&0&0\\0&1&0&0\\2/5&0&3/5&0\\0&2/5&0&3/5 \end{bmatrix}.$$
Then compute the fundamental matrix $M=(I-Q)^{-1}$ and the product $M\cdot S$. This yields $$M\cdot S = \begin{bmatrix}10/19&9/19\\4/19&15/19 \end{bmatrix} $$ The $(i,j)$ entry is the probability of being absorbed at the $j$-th absorbing state ($0$ or $3$) when starting from the $i$-th transient state ($1$ or $2$). So, if the chain starts at state 1, for example, the probability of eventual absorption at 0 is $10/19$.
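As a sanity check (a minimal NumPy sketch, not part of the original answer), the fundamental-matrix computation for this walk, with up-probability $3/5$ and down-probability $2/5$, takes only a few lines:

```python
import numpy as np

# Transient states 1 and 2; absorbing states 0 and 3.
Q = np.array([[0.0, 3/5],
              [2/5, 0.0]])          # transitions among transient states
S = np.array([[2/5, 0.0],
              [0.0, 3/5]])          # transient -> absorbing transitions
M = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix M = (I - Q)^{-1}
print(M @ S)  # [[10/19, 9/19], [4/19, 15/19]]
```

The top-left entry, $10/19 \approx 0.526$, is the answer to the original question: starting from state 1, absorption at 0 happens with probability $10/19$.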
FUN with basic linear algebra!