Imagine $m$ balls being exchanged between two adjacent chambers (left and right) according to the following rules. At each time step, one of the $m$ balls is selected uniformly at random and moved to the opposite chamber, i.e., if the selected ball is currently in the right chamber, it is moved to the left one, and vice versa. Let $X_n$ be the number of balls in the left chamber after the $n$th exchange. For $m=3$ I want to find all the one-step transition probabilities. I know the state space will be $\{0,1,2,3\}$ and that I am looking for the probabilities of the transitions $0\to1$, $1\to0$, $2\to1$, $1\to2$, $3\to2$, $2\to3$. I am struggling with how to account for the fact that the balls can have different starting positions. For example, from state $1$ you can either pick the one ball in the left chamber and move it (a $1\to0$ transition), or pick one of the two balls in the right chamber and move it to the left (a $1\to2$ transition) — so what would the probability for something like that look like?
[Math] One-step transition probabilities for a Markov chain
discrete mathematics · markov chains · probability
Related Solutions
I think you already have a decent answer to the problem. I would love to share another perspective.
For (b), since the balls are uniformly chosen each time, it is natural to conjecture that in the long run, the distribution of the balls in the two urns is as if we uniformly choose $m$ balls from $m$ white balls and $m$ black balls, put these $m$ chosen balls in urn 1, and put the remaining $m$ balls in urn 2. Therefore, if the conjecture is true, we have $$ \pi_i = \frac{ {{m}\choose{i}} {{m}\choose{m-i}} }{ {{2m}\choose{m}} }, $$ where there are in total ${{2m}\choose{m}}$ ways of choosing $m$ balls from $2m$ balls (ignoring order), and to have $i$ black balls in urn 1, we have to choose $i$ black balls (from $m$ black balls in total) and $m-i$ white balls (from $m$ white balls in total). So far, this is merely an intuitive argument. For (c), to verify that the $\{\pi_i\}$ are indeed the stationary probabilities: since the Markov chain is irreducible and positive recurrent, the stationary equations have a unique solution, so it suffices to verify that $\{\pi_i\}$ satisfy the stationary equations.
In particular, the Markov chain can only transition to neighbouring states (and, as indicated in the problem, the chain is time reversible), so we are encouraged to verify $$ \pi_{i} P_{i j}=\pi_{j} P_{j i} \quad \text { for all } i, j, $$ and $$ \sum_{i=0}^{m} \pi_{i} = 1 . $$ If both hold, we know that (1) the chain is indeed time reversible and (2) $\{\pi_i\}$ are indeed the stationary probabilities. (See, for example, Section 4.8 of Ross, *Introduction to Probability Models*, 12th edition.) The former equation can be verified after a little algebra, and the latter follows from our previous probabilistic argument for (b).
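The "little algebra" above can also be checked numerically. The sketch below assumes the standard Bernoulli–Laplace dynamics (one ball drawn uniformly from each urn, then the two are swapped); the excerpt does not spell out the $P_{ij}$, so those transition probabilities are my assumption. It verifies both the normalization $\sum_i \pi_i = 1$ and the detailed-balance equations $\pi_i P_{ij} = \pi_j P_{ji}$ for a small $m$.

```python
from math import comb

def pi(m):
    """Conjectured stationary distribution: pi_i = C(m,i) C(m,m-i) / C(2m,m)."""
    total = comb(2 * m, m)
    return [comb(m, i) * comb(m, m - i) / total for i in range(m + 1)]

def P(m, i, j):
    """One-step transition probabilities, ASSUMING Bernoulli-Laplace dynamics:
    draw one ball uniformly from each urn and swap them.  From state i
    (i black balls in urn 1), the chain moves down w.p. (i/m)^2, up w.p.
    ((m-i)/m)^2, and stays put otherwise."""
    if j == i - 1:
        return (i / m) ** 2
    if j == i + 1:
        return ((m - i) / m) ** 2
    if j == i:
        return 2 * i * (m - i) / m ** 2
    return 0.0

m = 5
p = pi(m)
assert abs(sum(p) - 1) < 1e-12  # pi is a probability distribution
for i in range(m + 1):
    for j in range(m + 1):
        # detailed balance: pi_i P_ij == pi_j P_ji
        assert abs(p[i] * P(m, i, j) - p[j] * P(m, j, i)) < 1e-12
print("detailed balance verified for m =", m)
```

Since detailed balance implies stationarity, passing both checks confirms (for this $m$) that $\{\pi_i\}$ solves the stationary equations.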
I said in comments that I thought you do not have information from the long-term distribution about moving left or right, and only partial information about moving up or down. But you can say that the transition probability of moving from the bottom row to the middle row is double the transition probability of moving from the middle row to the bottom row, while the transition probability of moving from the middle row to the top row is $1.5$ times the transition probability of moving from the top row to the middle row.
I am still not clear about the question, but let's suppose any answer meeting the condition will do. Then you could have, for example:
- $\Pr(1 \to 2)= \Pr(1 \to 4) = \Pr(2 \to 1)= \Pr(2\to 3)=\Pr (2 \to 5) = \Pr(3 \to 2)$ $=\Pr(3 \to 6) = 0.3$
- $\Pr(4 \to 1)= \Pr(4 \to 5) = \Pr(4 \to 7)= \Pr(5\to 2)=\Pr (5 \to 4) = \Pr(5 \to 6)$ $=\Pr(5 \to 8) =\Pr(6 \to 3) =\Pr(6 \to 5) =\Pr(6 \to 9) = 0.15$
- $\Pr(7 \to 4)= \Pr(7 \to 8) = \Pr(8 \to 5)= \Pr(8\to 7)=\Pr (8 \to 9) = \Pr(9 \to 6)$ $=\Pr(9 \to 8) = 0.1$
implying probabilities of no movement in a particular time step of
- $\Pr(1 \to 1) = 0.4$, $\Pr(2 \to 2) = 0.1$, $\Pr(3 \to 3) = 0.4$, $\Pr(4 \to 4) = 0.55$, $\Pr(5 \to 5) = 0.4$, $\Pr(6 \to 6) = 0.55$, $\Pr(7 \to 7) = 0.8$, $\Pr(8 \to 8) = 0.7$, $\Pr(9 \to 9) = 0.8$
If you simulate this from any starting position, I would expect that after, say, $100$ steps you would find each of the positions $1$ to $3$ having probability close to $\frac1{18}$, each of the positions $4$ to $6$ having probability close to $\frac1{9}$, and each of the positions $7$ to $9$ having probability close to $\frac1{6}$, adding up by row to $\frac16$, $\frac13$ and $\frac12$, which is what the question asked for.
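The suggested simulation can be carried out directly. The sketch below encodes the transition probabilities listed above into a $9\times9$ matrix (the holding probabilities fill each row out to $1$, matching the listed $\Pr(i \to i)$ values), then iterates the distribution forward from state $1$ and compares it with the claimed long-run row probabilities.

```python
# Encode the 3x3 grid chain from the answer above; states 1..9 map to indices 0..8.
n = 9
P = [[0.0] * n for _ in range(n)]
edges = {
    (1, 2): 0.3, (1, 4): 0.3, (2, 1): 0.3, (2, 3): 0.3, (2, 5): 0.3,
    (3, 2): 0.3, (3, 6): 0.3,
    (4, 1): 0.15, (4, 5): 0.15, (4, 7): 0.15, (5, 2): 0.15, (5, 4): 0.15,
    (5, 6): 0.15, (5, 8): 0.15, (6, 3): 0.15, (6, 5): 0.15, (6, 9): 0.15,
    (7, 4): 0.1, (7, 8): 0.1, (8, 5): 0.1, (8, 7): 0.1, (8, 9): 0.1,
    (9, 6): 0.1, (9, 8): 0.1,
}
for (i, j), p in edges.items():
    P[i - 1][j - 1] = p
for i in range(n):
    P[i][i] = 1.0 - sum(P[i])  # holding probability completes each row to 1

# Push a point mass at state 1 through many steps of the chain.
dist = [1.0] + [0.0] * (n - 1)
for _ in range(5000):
    dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Claimed limits: 1/18 for states 1-3, 1/9 for 4-6, 1/6 for 7-9.
target = [1 / 18] * 3 + [1 / 9] * 3 + [1 / 6] * 3
assert all(abs(d - t) < 1e-6 for d, t in zip(dist, target))
print("row totals:", sum(dist[:3]), sum(dist[3:6]), sum(dist[6:]))
```

Iterating the exact distribution (rather than sampling) avoids Monte Carlo noise; the self-loops make the chain aperiodic, so the distribution converges to the stationary one.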
Best Answer
$\{0,1,2,3\}$ is the set of states, that is, the set of possible numbers of balls in the left chamber. Assume that the system is in state $i$ ($i=0,1,2,3$). The probability that the system goes to state $i-1$ is $\frac i3$, because this is the probability that one selects a ball from the left box. The probability that the system goes to state $i+1$ is $\frac{3-i}3$, because this is the probability that one selects a ball from the right box.
For example, if the system is in state $1$ then there are only two possible transitions.
The system can go to state $2$ (with probability $\frac23$) or to state $0$ (with probability $\frac13$).
The state transition probability matrix is then
$$P= \begin{bmatrix} 0&1&0&0\\ \frac13&0&\frac23&0\\ 0&\frac23&0&\frac13\\ 0&0&1&0 \end{bmatrix}.$$
(Here the rows are assigned to the present state and the columns to the next state.)