Proof of the expected number of steps to walk to the next vertex in a path graph

graph theory, random walk

On "Random Walks on Graphs: a Survey" there's a proof of the expected number of steps to walk from vertex i to vertex j in a path with n nodes. As a first step it prompts us to notice that:

"First, observe that the access time $H(k-1, k)$ is one less than the expected return time of a random walk on a path with $k+1$ nodes, starting at the last node. As remarked, this return time is $2k$, so $H(k-1,k) = 2k – 1$."

The claim that $H(k-1,k)$ is one less than the expected return time is what I can't picture at all; I can't see how it is true. Furthermore, is this return time of $2k$ true specifically for paths, or does it hold for every graph?

Paper referred to: Lovász, László. Random walks on graphs: A survey. Combinatorics, Paul Erdős is Eighty, vol. 2, pp. 1-46, 1993.
Available at https://www.cs.yale.edu/publications/techreports/tr1029.pdf

Best Answer

The return time here is specific to the path graph. Going back to where this is derived:

  • For a random walk on a connected graph with $m$ edges, the unique stationary distribution is $\pi(i) = \frac{\deg(i)}{2m}$. In particular, for a path with $k+1$ vertices (so $m = k$ edges), the stationary distribution is $\pi = (\frac1{2k}, \frac1k, \frac1k, \dots, \frac1k, \frac1{2k})$.
  • For any irreducible Markov chain, the expected return time to state $i$ is $\frac1{\pi(i)}$.

So for a path with $k+1$ vertices, the expected return time to an endpoint is $\frac{1}{\pi(\text{endpoint})} = 2k$.
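
To make the $2k$ figure concrete, here is a minimal Monte Carlo sketch (not from the survey; the function name and parameters are my own) that simulates the walk on a path with $k+1$ vertices, starts at the endpoint $k$, and averages the number of steps until the walk first returns to $k$:

```python
import random

def mean_return_time_to_endpoint(k, trials=100_000):
    """Estimate the expected return time to vertex k on the path 0-1-...-k,
    for a simple random walk started at vertex k."""
    total_steps = 0
    for _ in range(trials):
        pos, steps = k, 0
        while True:
            # Endpoints have a single neighbour, so the move there is forced;
            # interior vertices move left or right with probability 1/2 each.
            if pos == 0:
                pos = 1
            elif pos == k:
                pos = k - 1
            else:
                pos += random.choice((-1, 1))
            steps += 1
            if pos == k:
                break
        total_steps += steps
    return total_steps / trials

# For k = 4 (a path on 5 vertices) the estimate should be close to 2k = 8.
print(mean_return_time_to_endpoint(4))
```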


To see the connection: on a path with vertices $0, 1, \dots, k$, the only way to leave $k$ and return is to:

  1. Step from $k$ to $k-1$. This always takes just $1$ step.
  2. Eventually get from $k-1$ back to $k$. This takes $H(k-1,k)$ steps by definition.

So the expected return time is $1 + H(k-1,k)$. Since we know this is $2k$, we conclude $H(k-1,k) = 2k-1$.
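
As a quick sanity check (again just an illustrative sketch, with a hypothetical `hitting_time` helper), one can estimate $H(k-1,k)$ directly and compare it with $2k-1$:

```python
import random

def hitting_time(start, target, n, trials=100_000):
    """Estimate H(start, target) for the simple random walk on the path 0-1-...-n."""
    total_steps = 0
    for _ in range(trials):
        pos, steps = start, 0
        while pos != target:
            if pos == 0:
                pos = 1            # forced move at the left endpoint
            elif pos == n:
                pos = n - 1        # forced move at the right endpoint
            else:
                pos += random.choice((-1, 1))
            steps += 1
        total_steps += steps
    return total_steps / trials

# On a path with k + 1 = 5 vertices (k = 4), H(k-1, k) should be close to 2k - 1 = 7.
print(hitting_time(3, 4, n=4))
```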

On a path with vertices $0, 1, \dots, n$ with $n>k$, the hitting time $H(k-1,k)$ will be the same: starting from $k-1$, the "extra vertices" $k+1, \dots, n$ cannot come into play before we reach $k$, since any walk from $k-1$ must pass through $k$ to reach them.
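
Reusing the same hypothetical `hitting_time` sketch from above, lengthening the path leaves the estimate unchanged:

```python
# Same start and target, but on longer paths: the estimates stay near 2k - 1 = 7,
# because a walk started at vertex 3 cannot visit 5, 6, ... without first hitting 4.
print(hitting_time(3, 4, n=10))
print(hitting_time(3, 4, n=50))
```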
