Expected time spent in a node between visits to another node in an asymmetric random walk on integers

combinatorics, markov chains, probability, random walk

Recall that the first passage time to state $i$ is the random variable $T_i$ defined by
\begin{equation*}
T_i(\omega)=\inf\{n\geq1:X_n(\omega)=i\}
\end{equation*}

where $\inf\emptyset=\infty$. Let $(X_n)_{n\geq0}$ be a simple random walk on $\mathbb{Z}$ with transition probabilities $p_{i,i-1}=q<p=p_{i,i+1}$ (so $p+q=1$ and the walk drifts to the right). Find
\begin{equation*}
\gamma^0_1=\mathbb{E}_0(\sum_{n=0}^{T_0-1}1_{X_n=1}).
\end{equation*}
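Before any exact computation, a minimal Monte Carlo sketch can be used to estimate $\gamma^0_1$ numerically; the value $p=0.7$, the sample size, and the step cap `max_steps` below are illustrative choices, and truncating an excursion is harmless because the walk drifts to $+\infty$.

```python
import random

def visits_to_1_before_return(p, max_steps=10_000):
    # One sample of sum_{n=0}^{T_0 - 1} 1{X_n = 1} for the walk started at 0.
    # T_0 may be infinite (the walk is transient for p > 1/2), so the excursion
    # is capped at max_steps; the rightward drift makes the truncation bias tiny.
    x, visits = 0, 0
    for _ in range(max_steps):
        x += 1 if random.random() < p else -1
        if x == 0:      # first return to 0: stop counting
            break
        if x == 1:
            visits += 1
    return visits

p, n_samples = 0.7, 100_000  # illustrative choices
estimate = sum(visits_to_1_before_return(p) for _ in range(n_samples)) / n_samples
print(f"estimated gamma^0_1 for p = {p}: {estimate:.3f}")
```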

My efforts:

\begin{equation*}
\gamma^0_1=\sum^\infty_{n=0}\mathrm{Pr}(X_n=1,n+1\leq T_0).
\end{equation*}
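One way to justify this interchange of expectation and summation is Tonelli's theorem, since all summands are nonnegative:
\begin{equation*}
\mathbb{E}_0\Bigl(\sum_{n=0}^{T_0-1}1_{X_n=1}\Bigr)=\mathbb{E}_0\Bigl(\sum_{n=0}^{\infty}1_{X_n=1,\,n+1\leq T_0}\Bigr)=\sum_{n=0}^{\infty}\mathrm{Pr}(X_n=1,n+1\leq T_0).
\end{equation*}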

\begin{equation*}
\mathrm{Pr}(X_{2k}=1,2k+1\leq T_0)=0.
\end{equation*}

\begin{equation*}
\mathrm{Pr}(X_{2k+1}=1,2k+2\leq T_0)=c_kp^{k+1}q^k.
\end{equation*}
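As a worked example, for $k=1$ the only admissible path is $0\to1\to2\to1$ (the path $0\to1\to0\to1$ is excluded because it hits $0$ at time $2$), so
\begin{equation*}
\mathrm{Pr}(X_3=1,4\leq T_0)=p^2q,\qquad c_1=1.
\end{equation*}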

We only need to determine $c_k$, which is a complicated counting problem. I know $c_0=1,c_1=1,c_2=2,c_3=5$. We only need to consider the walk to the right-hand side of $0$: on the event $\{X_{2k+1}=1,\,2k+2\leq T_0\}$ the farthest site the walker can reach is $k+1$. Of course the walker does not necessarily reach $k+1$; it can simply oscillate between two neighbouring sites, e.g. $k-1$ and $k$.
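For small $k$ the count can be checked by brute force; here is a small sketch (the helper name `c` is just for illustration) that enumerates all $\pm1$ step sequences of length $2k+1$ and keeps those realizing the event $\{X_{2k+1}=1,\,2k+2\leq T_0\}$:

```python
from itertools import product

def c(k):
    # Count paths X_0 = 0, ..., X_{2k+1} = 1 with steps +/-1 that never hit 0
    # at times 1, ..., 2k+1, i.e. paths realizing {X_{2k+1} = 1, 2k+2 <= T_0}.
    count = 0
    for steps in product((+1, -1), repeat=2 * k + 1):
        pos, ok = 0, True
        for s in steps:
            pos += s
            if pos == 0:   # returned to 0 too early
                ok = False
                break
        if ok and pos == 1:
            count += 1
    return count

print([c(k) for k in range(5)])  # [1, 1, 2, 5, 14] -- these match the Catalan numbers C_k
```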

Since $p>q$, this is a transient random walk, so we cannot use a stationary distribution to calculate $\gamma^0_1$.

The generating-function method for the number of steps needed to return to $0$ starting from $1$ does not help either: the number of steps taken to return to $0$ and the number of visits to $1$ are two very different quantities, even though the walk must pass through $1$ in order to return to $0$ from the right.

There seems to be no way to avoid counting. So my problem is: what is a smart way to count $c_k$?

Best Answer

We have
\begin{equation*}
\gamma^0_0=1,
\end{equation*}
since the only visit to $0$ counted by the sum is the one at time $n=0$, and
\begin{equation*}
\gamma^0_i=q\gamma^0_{i+1}+p\gamma^0_{i-1},\quad i\neq0,
\end{equation*}
which is the invariance relation $\gamma^0_j=\sum_i\gamma^0_ip_{ij}$ written out for $j\neq0$. Thus, using the boundary value $\gamma^0_0=1$,
\begin{equation*}
\gamma^0_i=A+(1-A)\Bigl(\frac{p}{q}\Bigr)^i.
\end{equation*}
Consider first the case $i>0$. To visit $i+1$ and return to $0$, the walker must visit $i$ at least twice, so $\gamma^0_i$ is a non-increasing function of $i$. But $(p/q)^i$ is an increasing function, so the best we can do is set $A=1$, giving $\gamma^0_i=1$ for all $i>0$; in particular $\gamma^0_1=1$.
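As a quick check that this family solves the recurrence, substitute $\gamma^0_i=A+B(p/q)^i$ (with $B=1-A$ forced by $\gamma^0_0=1$) into the right-hand side and use $p+q=1$:
\begin{equation*}
q\Bigl[A+B\Bigl(\frac{p}{q}\Bigr)^{i+1}\Bigr]+p\Bigl[A+B\Bigl(\frac{p}{q}\Bigr)^{i-1}\Bigr]=A(q+p)+B\Bigl(\frac{p}{q}\Bigr)^{i}(p+q)=A+B\Bigl(\frac{p}{q}\Bigr)^{i}.
\end{equation*}
Since the recurrence is linear and of second order, this two-parameter family is indeed the general solution.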

Now consider the sites to the left of $0$; still let $i>0$. We have
\begin{equation*}
\gamma^0_{-i}=A+(1-A)\Bigl(\frac{q}{p}\Bigr)^i.
\end{equation*}
Since the walk drifts to $+\infty$, far-away negative sites are visited with vanishing probability, so $\lim_{i\rightarrow\infty}\gamma^0_{-i}=0$. Hence $A=0$ and $\gamma^0_{-i}=(q/p)^i$ for all $i>0$.
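In particular $\gamma^0_1=1$. This is consistent with the series in the question: the listed values of $c_k$ match the Catalan numbers $C_k$ (after the forced first step $0\to1$, the remaining $2k$ steps form an excursion from $1$ back to $1$ that never drops below $1$, and such excursions are counted by $C_k$), and with $c_k=C_k$ the Catalan generating function gives
\begin{equation*}
\gamma^0_1=\sum_{k=0}^{\infty}C_kp^{k+1}q^k=p\cdot\frac{1-\sqrt{1-4pq}}{2pq}=p\cdot\frac{1-(p-q)}{2pq}=1,
\end{equation*}
using $\sqrt{1-4pq}=\sqrt{(p+q)^2-4pq}=p-q$ for $p>q$.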
