Let $G$ be the $k$-regular tree (the infinite tree in which every vertex has degree $k$), and let $X_n$ be the random walk on $G$ with transition probabilities induced by the edge weights, which are uniformly $1$; that is, from each vertex the walk moves to each of its $k$ neighbors with probability $\frac1k$. Let $d_n$ be the distance of $X_n$ from its starting position after $n$ steps.
I am trying to show that $d_n/n$ almost surely converges to some limit, and I am also trying to find this limit (as a function of $k$).
I tried computing the case where $X_n$ starts from the root by counting the number of ways that $d_n = m$ for each $m < n$, and using that to get an expression for $E(d_n)$. I ended up with an incredibly complicated expression that shed no light on the problem above.
I figure that I must be approaching this problem wrong, and that there must be some other way to look at it, which is why I am posting here, hoping that somebody can point me in the right direction.
Best Answer
Finding an exact expression for $d_n$ invariably leads to a messy formula, probably the one you discovered, but knowing the exact distribution of $d_n$ is not necessary to find the limit.
First, there is a way to simplify the random walk. For our purposes, all vertices of $G$ at distance $d \ge 1$ from the root are functionally identical: each has exactly one neighbor at distance $d-1$ (reached with probability $\frac1k$) and $k-1$ neighbors at distance $d+1$ (reached with total probability $1-\frac1k$). So instead of considering a random walk on $G$, we can consider a biased random walk on the states $\{0,1,2,\dots\}$ where each step is $+1$ with probability $1-\frac1k$ and $-1$ with probability $\frac1k$. (Except at $0$, which goes to $1$ with probability $1$, since all $k$ neighbors of the root are at distance $1$.)
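As a quick sanity check (not part of the proof), here is a minimal simulation of this reduced chain; the function name `reduced_step` and the parameter choices below are just illustrative:

```python
import random

def reduced_step(d, k, rng=random):
    """One step of the distance chain on {0, 1, 2, ...}:
    from d >= 1, move to d-1 with probability 1/k and to d+1
    with probability 1 - 1/k; from 0, move to 1 with probability 1."""
    if d == 0:
        return 1
    return d - 1 if rng.random() < 1 / k else d + 1

# Run the chain for a while and look at d_n / n.
rng = random.Random(0)
k, n, d = 4, 100_000, 0
for _ in range(n):
    d = reduced_step(d, k, rng)
print(d / n)  # should be close to 1 - 2/k = 0.5 for k = 4
```

For $k = 4$ the drift per step is $1 - \frac24 = \frac12$, so the printed ratio should land near $0.5$.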
It's easier to analyze a related Markov chain: the same biased random walk, but on all of the integers $\{\dots,-2,-1,0,1,2,\dots\}$, with no awkward boundary condition at $0$. Here the steps are i.i.d., equal to $+1$ with probability $1-\frac1k$ and $-1$ with probability $\frac1k$, so the expected position $S_n$ after $n$ steps is $E(S_n) = n\left(\left(1-\frac1k\right) - \frac1k\right) = n\left(1 - \frac2k\right)$. To upgrade this to the almost-sure statement $\lim_{n \to \infty} \frac{S_n}{n} = 1 - \frac2k$, the strong law of large numbers applies directly, since the steps are i.i.d. with finite mean. (Chebyshev's inequality alone only gives convergence in probability; if we want quantitative rates, a Chernoff-type bound combined with Borel–Cantelli gives almost-sure convergence with exponential tail estimates.)
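The expectation computation for the unrestricted walk on the integers can be checked empirically; this is a rough Monte Carlo sketch with arbitrary illustrative parameters, not part of the argument:

```python
import random

# Empirically check E[S_n] = n(1 - 2/k) for the unrestricted walk on Z,
# where each step is +1 with probability 1 - 1/k and -1 with probability 1/k.
k, n, trials = 4, 1_000, 2_000
rng = random.Random(1)
total = 0
for _ in range(trials):
    s = 0
    for _ in range(n):
        s += 1 if rng.random() < 1 - 1 / k else -1
    total += s
print(total / trials, n * (1 - 2 / k))  # empirical mean vs the predicted value
```

With $k = 4$ and $n = 1000$ the prediction is $n(1 - \frac24) = 500$, and the empirical mean over many trials should be close to that.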
The random walk we got for analyzing the tree is slightly different: at $0$, it moves to $1$ with probability $1$ instead of having a $\frac1k$ chance of moving to $-1$. However, the same analysis applies: for $k \ge 3$ both chains have positive drift, so with probability $1$ each visits $0$ only finitely many times (and the walk on the integers eventually stays in the positive states forever), and after the last visit to $0$ the two chains behave identically. So the limit is $1 - \frac2k$ in your case as well. (When $k = 2$, the tree is just $\mathbb Z$ and $d_n$ is the distance of a simple symmetric walk from its start, so $d_n/n \to 0 = 1 - \frac22$ there too.)