Probability – Random Walks on Galton-Watson Trees

Tags: markov-chains, pr.probability, random-walks, stochastic-processes, trees

I am working through a paper by Elie Aidekon, ‘Speed of the biased random walk on a Galton–Watson tree’, and have a question about one transformation in a proof:
\begin{align}
& 1+\frac{1}{1-\lambda}+\mathbb{E} \biggl[ \frac{\beta_n(I) \mathbf{1}_{\{ \beta(j)=0 \ \forall j\neq I\}}}{\lambda-1+\sum_{i=1}^{\nu(e)}\beta_n(i)}\biggr] \\
\tag{1}
\label{1}
={} & \frac{\lambda}{1-\lambda}+\mathbb{E} \biggl[ \frac{\beta_n(I) \mathbf{1}_{\{ \beta(j)=0 \ \forall j\neq I\}}}{\lambda-1+\beta_n(I)}\biggr] \\
\tag{2}
\label{2}
={} &\frac{\lambda}{1-\lambda}+\mathbb{E} [\nu q^{\nu-1}] \mathbb{E} \biggl[ \frac{\beta_{n-1}(e)}{\lambda-1+\beta_{n-1}(e)}\biggr].
\end{align}

Some additional information about it:

  • It holds that $\lambda<1$,

  • $\beta(e)=\mathcal{P}^{e}(\tau_{e_{*}}=\infty)$ is the probability that the random walk started at the root $e$ of the tree never hits the parent $e_*$,

  • $\beta_n(e)=\mathcal{P}^{e}\bigl(\tau^{(n)}<\tau_{e_*}\bigr)$ is the probability that the random walk started at the root $e$ of the tree hits level $n$ before it hits the parent $e_*$.

I have no idea where the $\frac{\lambda}{1-\lambda}$ in equation \eqref{1} comes from, or how the denominator shortens to $\lambda-1+\beta_n(I)$. Does anyone have a tip for me?

In equation \eqref{2} I know that the branching property is used, but I don’t see how. I only know the branching property defined via ‘lines’ and can’t see how to carry that definition over to Markov chains (without lines).
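As an aside, the meaning of $\beta_n(e)$ can be made concrete numerically. The following is a minimal Monte Carlo sketch, not anything from the paper: it assumes a regular $b$-ary tree instead of a Galton–Watson tree, and the usual $\lambda$-biased walk that steps to the parent with probability $\lambda/(\lambda+b)$ and to each child with probability $1/(\lambda+b)$. On a regular tree only the current level matters, so the level process is a birth–death chain and $\beta_n(e)$ has a gambler's-ruin closed form to compare against.

```python
import random

def beta_n_estimate(n, b=2, lam=0.5, trials=50_000, seed=0):
    """Monte Carlo estimate of beta_n(e) on a regular b-ary tree:
    the probability that the lambda-biased walk started at the root e
    reaches level n before hitting the parent e_*.

    On a regular tree only the current level matters, so we simulate
    the level process: level -1 is the parent e_*, level 0 is the root e.
    """
    rng = random.Random(seed)
    p_parent = lam / (lam + b)  # step toward the parent; otherwise to a child
    hits = 0
    for _ in range(trials):
        level = 0
        while -1 < level < n:
            level += -1 if rng.random() < p_parent else 1
        hits += (level == n)
    return hits / trials

# Gambler's-ruin closed form on the regular b-ary tree, for comparison:
# beta_n(e) = (1 - lam/b) / (1 - (lam/b)**(n+1)).
est = beta_n_estimate(n=5, b=2, lam=0.5)
exact = (1 - 0.25) / (1 - 0.25 ** 6)
print(round(est, 3), round(exact, 3))
```

On an actual Galton–Watson tree the offspring numbers along the path would have to be resampled, but the regular-tree case is enough to see what $\beta_n$ measures.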

Best Answer

It seems no one checked this calculation prior to the publication of the paper. The factor $\frac{\lambda}{1-\lambda}$ in (1) seems mistaken; it should be $\frac{2-\lambda}{1-\lambda}$. This has no effect on the finiteness claimed in the lemma. The denominator in (1) indeed takes the given form because all the other summands in that denominator vanish, thanks to the indicator in the numerator.
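For the record, the corrected constant is just the first two terms of the left-hand side of (1) put over a common denominator:

```latex
1+\frac{1}{1-\lambda}
  =\frac{(1-\lambda)+1}{1-\lambda}
  =\frac{2-\lambda}{1-\lambda},
```

and $\frac{2-\lambda}{1-\lambda}=\frac{\lambda}{1-\lambda}$ would force $2-\lambda=\lambda$, i.e. $\lambda=1$, which is excluded since $\lambda<1$.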

To go from (1) to a variant of (2), observe that $$\beta_n(I) \mathbf{1}_{\{ \beta(j)=0 \ \forall j\neq I\}}=\sum_{i=1}^{\nu(e)}\mathbf{1}_{\{I=i\}} \beta_n(i) \mathbf{1}_{\{ \beta(j)=0 \ \forall j\neq I\}} \quad (*)\,.$$

Next we wish to take conditional expectations given $\nu(e)$. For each $i,j \le \nu(e)$ we have $$ \mathbb{E} \biggl[ \frac{\beta_n(i) \mathbf{1}_{\{I=i\}}}{\lambda-1+\beta_n(i)} \,\bigg| \, \nu(e)\biggr] =\frac{\beta_{n-1}(e)\mathbf{1}_S} {\lambda-1+\beta_{n-1}(e)} $$ by the branching property (under which $\beta_n(i)$, viewed from the child $i$, becomes a fresh copy of $\beta_{n-1}(e)$ on the subtree rooted at $i$), and $P[\beta(j)=0 \mid \nu(e)] =q$ since $\lambda<1$.

Thus, using the independence of the subtrees determined by the offspring of $e$, we get $$\mathbb{E} \biggl[ \frac{\beta_n(I) \mathbf{1}_{\{ \beta(j)=0 \ \forall j\neq I\}}}{\lambda-1+\sum_{i=1}^{\nu(e)}\beta_n(i)} \,\bigg| \, \nu(e)\biggr] \le \nu(e) q^{\nu(e)-1} \frac{\beta_{n-1}(e)\mathbf{1}_S} {\lambda-1+\beta_{n-1}(e)} \,. $$ Now take another expectation.
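The terse ‘take another expectation’ is the tower property. A sketch of the final step, assuming the conditional bound above and that $\nu(e)$ is independent of the fresh copy $\beta_{n-1}(e)$ produced by the branching property:

```latex
\mathbb{E}\biggl[\frac{\beta_n(I)\,\mathbf{1}_{\{\beta(j)=0\ \forall j\neq I\}}}
  {\lambda-1+\sum_{i=1}^{\nu(e)}\beta_n(i)}\biggr]
=\mathbb{E}\Biggl[\mathbb{E}\biggl[\frac{\beta_n(I)\,\mathbf{1}_{\{\beta(j)=0\ \forall j\neq I\}}}
  {\lambda-1+\sum_{i=1}^{\nu(e)}\beta_n(i)}\,\bigg|\,\nu(e)\biggr]\Biggr]
\le\mathbb{E}\bigl[\nu q^{\nu-1}\bigr]\,
  \mathbb{E}\biggl[\frac{\beta_{n-1}(e)\,\mathbf{1}_S}{\lambda-1+\beta_{n-1}(e)}\biggr],
```

which, up to the indicator $\mathbf{1}_S$, is the second term in (2).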
