[Math] What equity is necessary to offer the doubling cube in Backgammon (dice game)

dice, probability, recreational-mathematics

Edit: This question is a lot shorter than it looks. Don't be intimidated. If you know backgammon, just skip to question 2.

In Backgammon, each game is played for one point (or one dollar) between two players. There is a die, called the doubling cube, which has the numbers $2, 4, 8, \ldots$ on its faces, sitting in the middle of the board (it is not 'owned' by anyone). The players take turns rolling two regular dice (not the doubling cube) and moving. But before each roll, a player can 'offer the cube' (or 'offer to double', 'double', etc.), which is basically saying "Hey, why don't we play this game for 2 points". The other player can drop (refuse the cube) and lose one point, or accept (take) the cube, in which case the game continues for twice as many points as before.

A player's equity in the game is the probability that he'll win a cubeless game, where cubeless means neither player can offer the cube (i.e., a one-point game). (For backgammon players who know the rules: I'm ignoring gammons and backgammons, so the equity equals the probability of winning. Edit: As @Henning Makholm's first answer indicates, I also do not want to include the equity of owning the doubling cube.)

I have two questions, but I know the answer to the first one and I think I'm calculating it right.

1) What equity does the player receiving the cube require in order to accept it? The answer, I'm told, is $0.25$.

The receiving player will accept the double when the expected value of taking it is at least the expected value of dropping (which automatically loses him $1$ point). Here $p$ is the probability (equal to the equity) that the receiving player wins (are you guys following all this?).

$E(take) \ge E(drop) \\
2p-2(1-p) \ge -1 \\
p \ge 0.25$
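To sanity-check the algebra, here's a tiny Python sketch of the take/drop comparison (the function names are mine, not standard):

```python
# Expected value, in points, for the player who has been offered the cube,
# as a function of his winning probability p in the resulting 2-point game.
def ev_take(p):
    return 2 * p - 2 * (1 - p)   # win 2 points with prob p, lose 2 otherwise

def ev_drop(p):
    return -1                    # dropping always costs exactly 1 point

# The take point is where the two coincide: 4p - 2 = -1, i.e. p = 0.25.
```

At $p = 0.25$ the two lines cross, and for any $p$ above it taking strictly beats dropping.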

I'm nearly certain that's correct, so

2) What equity is required for a player to offer the cube (offer to double the game's stakes)?

How is that calculated? We don't know whether the cube's recipient will accept or not.

I start the same as question 1: the giver will double when his expected value of doubling is greater than his EV of not doubling (duh!). If $E(rolling)$ is $p-(1-p)$ and $E(doubling)$ is $2p-2(1-p)$, then

$E(doubling) > E(rolling) \\
2p-2(1-p) > p-(1-p) \\
p > .5$
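The same naive comparison in code, with the hidden assumption spelled out (again, names are mine):

```python
# Naive doubling criterion: double iff E(doubling) > E(rolling).
# Hidden assumptions: the opponent ALWAYS takes, and neither owning the
# cube afterwards nor losing access to it is worth anything.
def ev_roll(p):
    return p - (1 - p)            # cubeless 1-point game

def ev_double_taken(p):
    return 2 * p - 2 * (1 - p)    # 2-point game, opponent takes

# The gap is (4p - 2) - (2p - 1) = 2p - 1, so the naive
# break-even point is p = 0.5.
```

That break-even at $p = 0.5$ is exactly what the algebra above gives, which is why the result feels wrong: the model ignores the opponent's option to drop and the value of cube ownership.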

Which can't possibly be correct. While I'm no BG expert, I used to play for (small amounts of) money in NYC. There is no way in heck that I would double with a 51% chance.

OK, that's all I got. How do we figure this out?
Thanks.

Best Answer

Here's an analysis that tries to take into account redoublings and cube ownership. It is based on completely ignoring the actual gameplay of backgammon and substituting the (quite counterfactual) assumption that the underlying game is a simple Brownian motion: The game starts by placing a counter at point 0.5 on a scale from 0 to 1. The counter then performs a one-dimensional continuous-time random walk. When it reaches either 1 or 0, the game ends and player A or B, respectively, is declared the winner. While the game is ongoing, the two players have the options of doubling and redoubling at arbitrary times, but otherwise according to the backgammon doubling rules.

In this game, the only choice a player has to make is when to offer the cube. His strategy can be summarized by two numbers $k$ and $\lambda$. When a player owns the doubling cube, he will offer a redouble as soon as the position is $k$ or more; he will offer the first doubling when the position reaches $\lambda$ for the first time. The situation before the first doubling is appreciably different from when the cube has an owner, so $\lambda$ can differ from $k$, but since a random walk is symmetric under time shifts, there is no reason to consider strategies where $k$ changes as the game ages.

Let's find the optimal $k$ first. Consider two functions $f$ and $g$ such that $f(p)$ is the expected value of the game (for the player who wins at $p=1$, and assuming optimal play) at position $p$, given that the player owns the cube, and $g(p)$ is the expected value when the player doesn't own the cube. These expected values are always between $0$ and $1$; we imagine that we have already paid $\frac 12$ into the pot that the winner will take home.

Because of symmetry we must have (if both players play optimally, which in particular means that their $k$s are the same): $$g(p)=1-f(1-p)$$ Look at the value of the game at position $k$ when we're just about to offer a redouble; let's call this value $v$. Then $$v=f(k) = \min(1, 2 g(k) - \frac12)$$ The $\min$ is because the opponent will only accept the doubling if doing so will be more advantageous to him than refusing. Subtracting $\frac 12$ accounts for our share of doubling the pot (or in other words, for the risk that we may eventually lose 2 units rather than 1).

Now, it is clear that we should redouble at least as soon as the point where a rational opponent would refuse it -- from that point, waiting any longer is not going to yield us anything. So we can do away with the $\min(1,\ldots)$ and just remember that $k$ must be chosen such that $v\le 1$. We then have $$\tag{1} v = 2g(k)-\frac12 = 2(1 - f(1-k))-\frac 12 = \frac32 - 2f(1-k)$$

When $p$ is between $0$ and $k$, the value of the game depends on the probability that the position will reach $k$ before it reaches $0$. By a wonderful property of Brownian motion, this probability is simply $p/k$, so we have $$\tag{2} f(p) = \frac pk f(k) = \frac vk p$$ Clearly, for optimal play we must choose $k$ such that the proportionality constant $\frac vk$ is as large as possible. To find the relation between $v$ and $k$, specialize (2) to $p=1-k$: $$ f(1-k) = \frac{1-k}{k} v$$ and telescope that into (1): $$ v = \frac 32 - 2\frac{1-k}{k} v \quad\Longrightarrow\quad v = \frac{3k}{4-2k}$$ Thus $\frac vk$, which we're trying to maximize, is $\frac{3}{4-2k}$. This increases monotonically with $k$, so we want to have $k$ as large as possible. But, as argued previously, we cannot have $v>1$, so we find the optimal $k$ by solving $1=v=\frac{3k}{4-2k}$ for $k$. This yields $$k=0.8$$ for optimal play once the doubling cube is owned.
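The algebra at the end can be double-checked with exact rational arithmetic (a small sketch; the helper names are mine):

```python
from fractions import Fraction

# Value at the redouble point, v = 3k/(4 - 2k), from telescoping (2) into (1).
def v(k):
    return 3 * k / (4 - 2 * k)

# The proportionality constant v/k = 3/(4 - 2k) grows with k, but we need
# v <= 1 (past that point a rational opponent simply drops).
# Solving v(k) = 1:  3k = 4 - 2k,  so  k = 4/5.
k_opt = Fraction(4, 5)
```

Plugging $k = 4/5$ back in confirms $v = 1$ exactly, i.e. the opponent is precisely indifferent at the redouble point.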

We're now ready to find $\lambda$. At the beginning of the game the situation is symmetric, so if both players follow the same (optimal) strategy, each player will make the first doubling offer with probability $\frac 12$. The objective is then to maximize the value of the game after that first doubling happens: $$ g(\lambda) = 1-f(1-\lambda) = 1 - \frac{1-\lambda}k = 1.25\lambda - 0.25$$ which is maximized by choosing $\lambda$ as large as possible. But as before, choosing a $\lambda$ so large that the opponent refuses our doubling is just a waste. So in fact $\lambda$ should be chosen just at the threshold where the opponent would start refusing the doubling. But that happens to be the same criterion as was used to find $k$, so in fact $\lambda=k$ is optimal.
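A quick numeric check of the $\lambda$ step (the function name is my own):

```python
# Value to the first doubler at the moment of doubling, with the cube
# passing to the opponent; uses k = 0.8 from the redouble analysis.
def g_at_double(lam, k=0.8):
    return 1 - (1 - lam) / k      # equals 1.25*lam - 0.25 when k = 0.8

# g increases in lam, so we push lam up to the opponent's drop threshold,
# which is the same threshold as for redoubles: lam = k = 0.8.
```

At $\lambda = 0.8$ this gives a game value of $0.75$ for the doubler, consistent with $1.25\lambda - 0.25$.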

Conclusion: For optimal play with doublings and redoublings, assuming that backgammon can be modeled as a Brownian motion:

  • Offer to double or redouble as soon as your position is 80% or more.
  • Accept a doubling or redoubling offer if your position is better than 20%.

Exercise: prove that with this strategy, no matter which strategy your opponent follows, your net expected outcome of a game is never negative.
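While not a proof, a Monte Carlo version of the model supports the conclusion: approximate the Brownian motion by a symmetric random walk on an integer grid $0..20$, with the 80%/20% thresholds at positions 16 and 4, and let both players follow the strategy above (the opponent takes at his exact indifference point). All names and grid choices here are mine, not from the answer:

```python
import random

def play_game(n=20, rng=random):
    """One game of the random-walk model with k = 0.8 cube strategy.

    Returns player A's net result in points (a signed power of 2)."""
    pos = n // 2               # game starts at position 0.5
    k = int(0.8 * n)           # doubling threshold: winning chance >= 80%
    stake, owner = 1, None     # None: cube in the middle, either may double
    while 0 < pos < n:
        if pos >= k and owner in (None, "A"):
            # A doubles; B's winning chance is exactly 20%, so B takes
            stake, owner = 2 * stake, "B"
        elif pos <= n - k and owner in (None, "B"):
            # symmetric case: B doubles, A takes and owns the cube
            stake, owner = 2 * stake, "A"
        pos += 1 if rng.random() < 0.5 else -1
    return stake if pos == n else -stake

rng = random.Random(0)
results = [play_game(rng=rng) for _ in range(20000)]
win_rate = sum(r > 0 for r in results) / len(results)
mean = sum(results) / len(results)
```

With both players optimal, the situation is symmetric, so the sample mean should hover near zero and the win rate near one half; every game ends at $\pm 2^m$ points, since the walk must cross one of the thresholds before it can reach a boundary.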
