$A$ and $B$ are opposite vertices of a cube. A spider sits on $A$ and takes steps randomly along any edge connected to the vertex it is on at that moment. This situation ends when the spider makes it to $B$. Compute the expected value of the number of moves the spider makes.
Expected value of the time taken by a random walk on a cube
Related Solutions
Extending to $d$-dimensions
So I had enough time to procrastinate today and I extended this problem to $d$ dimensions. I would appreciate it if someone could read through the answer and suggest any simplification of the final answer if possible. Do the final numbers $u_n$ constitute a nice, well-known sequence? Further, what other interesting problems arise out of this ant-and-the-cube problem? I think another nice problem is the expected cover time. Thanks
Working it out explicitly for $d=3$ gave some nice hints and a good picture of what is going on. We use these to extend to any dimension $d$.
The first thing is to define a cube in $d$-dimensions. A $\mathbf{d}$-dimensional cube is a set of vertices of the form $(i_1,i_2,\ldots,i_d)$ where $i_k \in \{0,1\}$. There is an edge between vertex $(i_1,i_2,\ldots,i_d)$ and vertex $(j_1,j_2,\ldots,j_d)$ if $\exists l$ such that $\left | i_l - j_l \right| = 1$ and $i_k = j_k$, $\forall k \neq l$.
Two vertices are said to be adjacent if they share an edge. It is easy to note that every vertex shares an edge with exactly $d$ vertices. (Consider a vertex $(i_1,i_2,\ldots,i_d)$. Choose any $i_k$ and replace $i_k$ by $1-i_k$. This gives an adjacent vertex, and hence there are $d$ adjacent vertices, as $k$ can be any number from $1$ to $d$.) Hence, the probability that an ant at a vertex will choose any particular incident edge is $\frac1{d}$.
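The "flip one coordinate" description of adjacency translates directly into code. A minimal sketch (the function name `neighbors` is mine) that also verifies the degree claim:

```python
from itertools import product

def neighbors(v):
    """All vertices adjacent to v in the hypercube: flip exactly one coordinate."""
    return [v[:k] + (1 - v[k],) + v[k + 1:] for k in range(len(v))]

# Every vertex of the d-cube has degree d.
d = 4
for v in product((0, 1), repeat=d):
    assert len(neighbors(v)) == d
```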
For every vertex, $(i_1,i_2,\ldots,i_d)$ let $s((i_1,i_2,\ldots,i_d)) = \displaystyle \sum_{k=1}^d i_k$. Note that $s$ can take values from $0$ to $d$. There are $\binom{d}{r}$ vertices such that $s((i_1,i_2,\ldots,i_d)) = r$.
Let $v_{(i_1,i_2,\ldots,i_d)}$ denote the expected number of steps taken from $(i_1,i_2,\ldots,i_d)$ to reach $(1,1,\ldots,1)$
Let $S_r = \{ (i_1,i_2,\ldots,i_d) \in \{0,1\}^d: s((i_1,i_2,\ldots,i_d)) = r\}$
Claim: Consider two vertices say $a,b \in S_r$. Then $v_a = v_b$. The argument follows easily from symmetry. It can also be seen from writing down the equations and noting that the equations for $a$ and $b$ are symmetrical.
Further note that if $a \in S_r$, with $0 < r < d$, then any adjacent vertex of $a$ must be in $S_{r-1}$ or $S_{r+1}$. Any adjacent vertex of $(0,0,\ldots,0)$ belongs to $S_1$ and any adjacent vertex of $(1,1,\ldots,1)$ belongs to $S_{d-1}$. In fact, for any $a \in S_r$, $r$ adjacent vertices $\in S_{r-1}$ and $d-r$ adjacent vertices $\in S_{r+1}$.
Let $u_r$ denote the expected number of steps from any vertex $\in S_r$ to reach $(1,1,\ldots,1)$. We have \begin{align} u_r & = 1 + \frac{r}{d} u_{r-1} + \frac{d-r}{d} u_{r+1} & r \in \{1,2,\ldots,d-1\}\\ u_0 & = 1 + u_1 & r = 0\\ u_d & = 0 & r = d \end{align} Setting up a matrix gives us a tridiagonal system to be solved. Instead, we go about solving this as follows.
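Before solving the recurrence by hand, the tridiagonal system above can be solved numerically as a sanity check. A minimal sketch (the function name `hitting_times` is mine):

```python
import numpy as np

def hitting_times(d):
    """Solve the tridiagonal first-step system for u_0, ..., u_{d-1} (u_d = 0)."""
    A = np.eye(d)
    b = np.ones(d)
    if d > 1:
        A[0, 1] = -1.0                  # u_0 = 1 + u_1
    for r in range(1, d):
        A[r, r - 1] = -r / d            # step down to S_{r-1} with prob r/d
        if r + 1 < d:
            A[r, r + 1] = -(d - r) / d  # step up to S_{r+1} with prob (d-r)/d
    return np.linalg.solve(A, b)

# d = 3 recovers the cube answer: u = [10, 9, 7]
```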
Let $p_r = \frac{r}{d}$, $\forall r \in \{0,1,\ldots,d-1\}$ (so $p_0 = 0$). Then the equations become \begin{align} u_r & = 1 + p_r u_{r-1} + (1-p_r) u_{r+1} & r \in \{1,2,\ldots,d-1\}\\ u_0 & = 1 + (1-p_0) u_1 & r = 0\\ u_d & = 0 & r = d \end{align} Let $a_{r} = u_{r+1} - u_r$. Then we get \begin{align} p_r a_{r-1} & = 1 + (1-p_r)a_r & r \in \{1,2,\ldots,d-1\}\\ a_0 & = -1 & r = 0 \end{align} Note that $u_m = - \displaystyle \sum_{k=m}^{d-1} a_k$, since $u_d = 0$. Rearranging, \begin{align} a_0 & = -1 & r = 0\\ a_{r} & = \frac{p_r}{1-p_r} a_{r-1} - \frac1{1-p_r} & r \in \{1,2,\ldots,d-1\} \end{align} Let $l_r = \frac{p_r}{1-p_r} = \frac{r}{d-r}$, so that $\frac1{1-p_r} = 1 + l_r$: \begin{align} a_0 & = -1 & r = 0\\ a_{r} & = l_r a_{r-1} - (1+l_r) & r \in \{1,2,\ldots,d-1\} \end{align} Unrolling the recurrence, \begin{align} a_1 &= l_1 a_0 - (1+l_1)\\ a_2 & = l_2 l_1 a_0 - l_2(1+l_1) - (1+l_2)\\ a_3 & = l_3 l_2 l_1 a_0 - l_3 l_2 (1+l_1) - l_3 (1+l_2) - (1+l_3)\\ a_m & = \left( \prod_{k=1}^{m} l_k \right) a_0 - \displaystyle \sum_{k=1}^{m} \left((1+l_k) \prod_{j=k+1}^m l_j \right) \end{align} Since $a_0 = -1$ and $l_0 = 0$ (so $1+l_0 = 1$), the $a_0$ term can be absorbed as the $k=0$ summand: \begin{align} a_m & = - \displaystyle \sum_{k=0}^{m} \left((1+l_k) \prod_{j=k+1}^m l_j \right) \end{align} Hence, \begin{align} u_n & = - \displaystyle \sum_{m=n}^{d-1} a_m\\ u_n & = \displaystyle \sum_{m=n}^{d-1} \sum_{k=0}^{m} \left((1+l_k) \prod_{j=k+1}^m l_j \right)\\ u_n & = \displaystyle \sum_{m=n}^{d-1} \sum_{k=0}^{m} \left(\frac{d}{d-k} \prod_{j=k+1}^m \frac{j}{d-j} \right)\\ u_n & = \displaystyle \sum_{m=n}^{d-1} \frac{\displaystyle \sum_{k=0}^{m} \binom{d}{k}}{\binom{d-1}{m}} \end{align} Note that \begin{align} u_n & = \frac{\displaystyle \sum_{k=0}^{n} \binom{d}{k}}{\binom{d-1}{n}} + u_{n+1} & \forall n \in \{0,1,2,\ldots,d-2 \} \end{align} The expected number of steps from one vertex away is when $n = d-1$, and hence $u_{d-1} = 2^d-1$.
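The closed form $u_n = \sum_{m=n}^{d-1} \binom{d-1}{m}^{-1}\sum_{k=0}^{m} \binom{d}{k}$ is easy to evaluate exactly with rational arithmetic. A quick check (the function name `u` is mine) that it reproduces the cube answer and the two boxed special cases:

```python
from fractions import Fraction
from math import comb

def u(d, n):
    """Closed form: u_n = sum_{m=n}^{d-1} (sum_{k<=m} C(d,k)) / C(d-1,m)."""
    return sum(Fraction(sum(comb(d, k) for k in range(m + 1)), comb(d - 1, m))
               for m in range(n, d))

assert u(3, 0) == 10                    # cube: start at the opposite corner
assert u(3, 2) == 7                     # cube: one step away
for d in range(2, 8):
    assert u(d, d - 1) == 2**d - 1      # one vertex away
    # two vertices away: u_{d-2} = 2d(2^{d-1} - 1)/(d - 1)
    assert u(d, d - 2) == Fraction(2 * d * (2**(d - 1) - 1), d - 1)
```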
The expected number of steps from two vertices away is when $n = d-2$ and hence $u_{d-2} = \frac{2d(2^{d-1} - 1)}{d-1}$
The answers for the expected number of steps from a vertex away and from two vertices away coincide with Douglas Zare's comment.
Initial Solution
Problems such as these fall in the category of Markov chains and one way to solve this is through first step analysis.
We shall denote the vertices of the cube by numbers from $1$ to $8$ with $1$ and $8$ being the opposite ends of the body diagonal.
Let $v_i$ denote the expected number of steps to reach the vertex numbered $8$ starting at vertex numbered $i$.
$v_1 = 1 + \frac{1}{3}(v_2 + v_4 + v_6)$; $v_2 = 1 + \frac{1}{3}(v_1 + v_3 + v_7)$; $v_3 = 1 + \frac{1}{3}(v_2 + v_4 + v_8)$; $v_4 = 1 + \frac{1}{3}(v_1 + v_3 + v_5)$; $v_5 = 1 + \frac{1}{3}(v_4 + v_6 + v_8)$; $v_6 = 1 + \frac{1}{3}(v_1 + v_5 + v_7)$; $v_7 = 1 + \frac{1}{3}(v_6 + v_2 + v_8)$; $v_8 = 0$;
Note that by symmetry you have $v_2 = v_4 = v_6$ and $v_3 = v_5 = v_7$.
Hence, $v_1 = 1 + v_2$ and $v_2 = 1 + \frac{1}{3}(v_1 + 2v_3)$ and $v_3 = 1 + \frac{2}{3} v_2$.
Solving we get $$\begin{align} v_1 & = 10\\ v_2 = v_4 = v_6 & = 9\\ v_3 = v_5 = v_7 & = 7 \end{align}$$
Hence, the expected number of steps to reach the diagonally opposite vertex is $10$.
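The value $10$ is also easy to confirm by simulation: walk on the $3$-cube by flipping a uniformly random coordinate until all coordinates are $1$. A minimal Monte Carlo sketch (the function name `walk_length` is mine):

```python
import random

def walk_length(d=3):
    """One random walk on the d-cube from (0,...,0) until it hits (1,...,1)."""
    v = [0] * d
    steps = 0
    while sum(v) < d:
        k = random.randrange(d)  # choose one of the d incident edges uniformly
        v[k] = 1 - v[k]
        steps += 1
    return steps

random.seed(0)
trials = 100_000
mean = sum(walk_length() for _ in range(trials)) / trials
# mean should be close to the exact answer 10
```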
A random walk on the cube has eight states, but by symmetry we can reduce this to four states: the start $s$, its nearest neighbors $n$, the farther neighbors $f$, and the opposite vertex $o$. The "boundary" is the single state $\{o\}$.
Define $h(x)$ as the expected time to hit the boundary starting from $x$, i.e., $h(x)=\mathbb{E}_x(T)$, where $T$ is the hitting time of $o$.
First step analysis gives \begin{eqnarray*} h(s)&=&1+h(n)\vphantom{1\over3}\\[3pt] h(n)&=&1+{1\over3}\, h(s)+{2\over 3}\, h(f)\\[3pt] h(f)&=&1+{1\over3}\, 0+{2\over 3}\, h(n). \end{eqnarray*}
You can work out that $h(s)=10$.
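The $3\times 3$ system for $h(s), h(n), h(f)$ can be worked out exactly with rational arithmetic; here is a small Gaussian-elimination sketch (exact `Fraction`s, so no rounding):

```python
from fractions import Fraction as F

# Unknowns x = (h(s), h(n), h(f)); each row is one first-step equation.
A = [[F(1),     F(-1),    F(0)],     # h(s) - h(n)              = 1
     [F(-1, 3), F(1),     F(-2, 3)], # h(n) - h(s)/3 - 2h(f)/3  = 1
     [F(0),     F(-2, 3), F(1)]]     # h(f) - 2h(n)/3           = 1
b = [F(1), F(1), F(1)]

# Forward elimination (exact arithmetic; no pivoting needed here).
n = 3
for i in range(n):
    for j in range(i + 1, n):
        f = A[j][i] / A[i][i]
        b[j] -= f * b[i]
        for k in range(i, n):
            A[j][k] -= f * A[i][k]

# Back substitution.
x = [F(0)] * n
for i in reversed(range(n)):
    x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]

# x == [10, 9, 7], matching h(s) = 10, h(n) = 9, h(f) = 7
```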
Best Answer
I'm not certain what the standard notation for this is, but here's my proprietary method for expected-value questions: call the expected time taken from the starting vertex $v_3$ (this is what we want), the expected value for vertices at Manhattan distance $2$ from the end $v_2$, the expected value for vertices at Manhattan distance $1$ from the end $v_1$, and the expected value for the ending vertex $v_0$. From $v_3$, there is a $\frac{3}{3}=1$ probability of ending up at $v_2$; I write this as $v_3=v_2+1$. From $v_2$, there is a $\frac{2}{3}$ probability of ending up at $v_1$ and a $\frac{1}{3}$ probability of ending up at $v_3$; I write this as $v_2=\frac{2}{3}v_1+\frac{1}{3}v_3+1$. From $v_1$, there is a $\frac{1}{3}$ probability of ending up at $v_0$ and a $\frac{2}{3}$ probability of ending up at $v_2$; I write this as $v_1=\frac{1}{3}v_0+\frac{2}{3}v_2+1$. But the expected time taken from $v_0$ is $0$, so we can leave that term out, getting $v_1=\frac{2}{3}v_2+1$. We now have a system of equations and want to solve for $v_3$:
$$v_3=v_2+1$$ $$v_2=\frac{2}{3}v_1+\frac{1}{3}v_3+1$$ $$v_1=\frac{2}{3}v_2+1$$
So $v_2=\frac{2}{3}v_1+\frac{1}{3}(v_2+1)+1$, or $\frac{2}{3}v_2=\frac{2}{3}v_1+\frac{4}{3}$, or $v_2=v_1+2$. But we also know that $v_1=\frac{2}{3}v_2+1$, so $v_2=(\frac{2}{3}v_2+1)+2=\frac{2}{3}v_2+3$, or $\frac{1}{3}v_2=3$, or $v_2=9$. Then $\boxed{v_3=9+1=10}$.
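The substitution chain above is short enough to replay in exact arithmetic as a check (variable names mirror the text):

```python
from fractions import Fraction as F

# From v2 = v1 + 2 and v1 = (2/3)v2 + 1:  v2 = (2/3)v2 + 3,
# i.e. (1/3)v2 = 3.
v2 = F(3) / (1 - F(2, 3))
v1 = F(2, 3) * v2 + 1
v3 = v2 + 1

assert (v1, v2, v3) == (7, 9, 10)
```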
Honestly, this method is probably just a stripped-down version of a more formal way of writing down the same solution (I don't remember which), but it's always worked for me in the past, and that's actually why I suspect it's equivalent to known methods.