The probability of winning a peculiar dice game

convergence-divergence, probability, sequences-and-series

I'm a high school teacher, and students in my probability class created the following fun conundrum. We've been stuck on it for a couple of weeks:

  • Consider this dice game played with one fair $n$-sided die.
  • On the first turn, a roll of $n$ wins, while a roll of $1$ loses. On any other result, the player rolls again.
  • On the 2nd roll, a roll of $n$ wins, while a roll of $1$ or $2$ loses. On any other roll, the game continues.
  • On roll $k$, the player wins with a roll of $n$ and loses with a roll of $k$ or below.
  • What is the probability of winning as $n \to \infty$?

The game must be won in no more than $n - 1$ turns, and for any given $n$,
$$
\mathrm{P}\left(\text{win}\right)
=
{1 \over n} +
\sum_{i = 2}^{n - 1}\frac{\left(n - 2\right)!}{\left(n - i - 1\right)!\, n^{i}}
$$
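As a numerical sanity check (a sketch, not part of the original question; the function name `p_win` is my own), the sum can be evaluated without large factorials by tracking the probability that the game is still undecided round by round:

```python
import math

def p_win(n):
    """Exact P(win) for the n-sided game: term k is the probability
    of surviving rounds 1..k-1 and then rolling n on round k."""
    total = 1.0 / n           # win on the first roll
    alive = 1.0               # P(game still undecided so far)
    for k in range(2, n):     # rounds 2 .. n-1
        alive *= (n - k) / n  # survive round k-1: avoid n and 1..k-1
        total += alive / n    # then roll n on round k
    return total

for n in (10, 100, 1000, 10000):
    print(n, p_win(n), p_win(n) * math.sqrt(n))
```

Numerically, $p\_win(n)\cdot\sqrt{n}$ appears to level off near $1.25$, consistent with a decay on the order of $1/\sqrt{n}$.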

Here is where I'm stuck. Does
$$
\lim_{n \to \infty}\mathrm{P}\left(\text{win}\right) = 0,
$$

or does $\mathrm{P}\left(\text{win}\right)$ converge to some other nonzero probability as $n \to \infty$?
How might one show this?

Best Answer

I cannot prove that $P(win) \approx {1 \over \sqrt{n}}$, but here is a proof that:

Claim: $P(win) \le {2 \over \sqrt{n}}$

which of course implies $P(win) \to 0$, answering the OP's question.

Proof: For convenience, let $G =$ the OP's original game. Consider some large, fixed $n$; for readability, treat $\sqrt{n}$ as an integer (read $\lceil \sqrt{n}\, \rceil$ where needed). Define:

  • event $A =$ win $G$ on or before $\sqrt{n}$ turns

  • event $B =$ win $G$ after $\sqrt{n}$ turns

We will separately bound $P(A), P(B)$ both $\le {1 \over \sqrt{n}}$. The main claim then follows because $P(win) = P(A)+P(B)$.

Lemma: $P(A) \le {1\over \sqrt{n}}$: Imagine a modified game $G'$ where the die is always rolled for all $n$ rounds, and the game result is the first winning or losing roll. This modified game $G'$ is clearly equivalent to the OP's truncated version $G$.

Let random variable $X=$ no. of winning rolls among the initial $\sqrt{n}$ rounds of $G'$. To win on or before $\sqrt{n}$ rounds, it is necessary (though not sufficient) that $X\ge 1$, i.e. $P(A) \le P(X\ge 1)$. Meanwhile, for any non-negative integer-valued random variable, we have:

$$P(X\ge 1) = \sum_{k=1}^\infty P(X=k) \le \sum_{k=1}^\infty kP(X=k) = E[X]$$

In this case, by linearity, $E[X] = {1\over n} \sqrt{n} = {1 \over \sqrt{n}}.$ Combining, we have:

$$P(A) \le P(X\ge 1) \le E[X] = {1 \over \sqrt{n}} \;\;\; \square$$
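Concretely, since every round of $G'$ wins independently with probability $1/n$, we have $X \sim \mathrm{Binomial}(m, 1/n)$ with $m = \sqrt{n}$, and the first-moment bound can be checked numerically (a sketch; `p_A_bound` is my own name, and $n$ is assumed to be a perfect square):

```python
import math

def p_A_bound(n):
    """For X ~ Binomial(m, 1/n) with m = sqrt(n) rounds, return
    (P(X >= 1), E[X]); the first should never exceed the second."""
    m = math.isqrt(n)                     # assumes n is a perfect square
    p_at_least_one = 1 - (1 - 1 / n) ** m
    return p_at_least_one, m / n

# e.g. n = 10000: P(X >= 1) falls just under E[X] = 1/sqrt(n) = 0.01
```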

Lemma: $P(B) \le {1 \over \sqrt{n}}$: First, define:

  • event $E= $ game $G$ is inconclusive (neither won nor lost) after $\sqrt{n}$ rounds

Note that event $B$ requires event $E$, i.e. $P(B) = P(B \cap E) = P(E) P(B \mid E)$.

Now imagine two different modified games. In $G_1$, the game starts with $1$ to $\sqrt{n}$ as the losing numbers and adds one more losing number every round. By construction, $P(\text{win }G_1) = P(B \mid E)$.

In $G_2$, the game starts with $1$ to $\sqrt{n}$ as the losing numbers and the set of losing numbers doesn't change for the rest of the game. Clearly, $G_2$ is easier to win than $G_1$ (e.g. via a sample-point by sample-point dominance argument).

The modified game $G_2$ is not limited to $n$ rounds, but it does terminate with probability $1$. Therefore, the ratio:

$${P(\text{win }G_2) \over P(\text{lose }G_2)} = {P(\text{win }G_2 \mid G_2 \text{ terminates}) \over P(\text{lose }G_2 \mid G_2 \text{ terminates})} = {\text{no. of winning rolls} \over \text{no. of losing rolls}} = {1 \over \sqrt{n}}$$

The easiest proof of the above is to consider $G_2$ as a $3$-state Markov chain with $2$ absorbing states, for win and loss. Alternatively, one can consider there to be $1+\sqrt{n}$ terminating roll values which, by symmetry, are all equally likely, and then color exactly $1$ of them as "winning" and the rest as "losing".
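The absorption probability of that chain can be written in one line; here $m$ plays the role of $\sqrt{n}$ (a sketch with my own function name):

```python
def g2_win_prob(n, m):
    """Win probability of G_2: each round, a roll of n wins (prob 1/n),
    rolls 1..m lose (prob m/n), and anything else repeats the round.
    Conditioning on the terminating round gives p = (1/n)/((1+m)/n)."""
    return (1 / n) / ((1 + m) / n)

# e.g. n = 100 with m = 10 losing numbers gives 1/11
```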

Combining everything, we have:

$$P(B) = P(E) P(B \mid E) \le P(B \mid E) = P(\text{win }G_1) \le P(\text{win }G_2) = {1 \over 1 + \sqrt{n}} < {1 \over \sqrt{n}} \;\;\; \square$$
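Finally, a small Monte Carlo simulation of the original game $G$ (a hypothetical sketch; the seed and trial count are arbitrary choices of mine) agrees with the claimed bound:

```python
import math, random

def simulate_G(n, rng):
    """Play one game of G: on round k, a roll of n wins, 1..k loses."""
    for k in range(1, n):           # the game always ends by round n-1
        roll = rng.randint(1, n)
        if roll == n:
            return True
        if roll <= k:
            return False
    return False                    # never reached: round n-1 always decides

rng = random.Random(0)              # fixed seed, so the run is reproducible
n, trials = 100, 50_000
est = sum(simulate_G(n, rng) for _ in range(trials)) / trials
print(f"estimate {est:.4f} vs bound {2 / math.sqrt(n):.4f}")
```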
