[Math] Calculating Risk of Ruin for known profit/loss and probabilities

gambling probability risk-assessment

Suppose I have a binary event with an uncertain outcome: in one outcome I make a profit, and in the other I make a loss (the sizes of the profit and loss are known), but overall I have a positive expectation from the transaction. I'd like to calculate the chance that, if this event occurs many times, I lose some given amount before I overcome the variance of the transaction.

So for example, say I am making a bet which wins 90% of the time. When I win, I profit by $6, but when I lose my losses are $30. I believe my EV for this bet is $2.40:

$
(0.9 \times 6) - (0.1 \times 30) = 2.40
$

If I make this same bet over and over again an infinite number of times, I will show a profit – provided I don't run out of money to bet.

I would like to be able to calculate the chances that I lose some given amount of money (say, $2000) by making this same bet repeatedly.

I presume that I need to set some finite sample size to be able to say:

"My chances of losing the entire $2000 within x bets is y%"

but if it's possible to calculate it such that I can make a statement like:

"My chances of losing the initial $2000 if I make this bet forever is z%"

that would be fine too. Assume that when I win, profits are added to my initial $2000 bankroll.

If anyone can help me understand the math to achieve this, that would be greatly appreciated. I've read a bit around online about Risk of Ruin with regards to investments and gambling and some stuff about Kelly Criterion, but I can't quite get my head around it, so hopefully this example will allow someone to illustrate to me how it can be done.

The goal behind this question is to determine a way to size bets according to an available bankroll to reduce Risk of Ruin to some "acceptable" level, say less than 0.1%.
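For a concrete baseline before any exact math, here is a brute-force Monte Carlo sketch of the scenario above (the function name and its default parameters are my own choices, not anything standard): it simulates many bettors making the $6/$30 bet repeatedly and reports the fraction who lose the whole bankroll within a finite number of bets.

```python
import random

def estimate_ruin(bankroll=2000.0, win=6.0, loss=30.0, p_win=0.9,
                  max_bets=5_000, trials=500, seed=1):
    """Fraction of simulated bettors who lose the entire bankroll
    within max_bets repeated bets (winnings are added to the bankroll)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        cash = bankroll
        for _ in range(max_bets):
            cash += win if rng.random() < p_win else -loss
            if cash <= 0:          # bankroll exhausted: ruin
                ruined += 1
                break
    return ruined / trials
```

With these numbers the estimate comes out at (or very near) zero, which already hints that a $2000 bankroll is quite safe for this particular bet, though a simulation of this size cannot resolve risks on the order of 0.1%.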

Best Answer

Following the ideas in the suggested reading by Trurl, here is an outline of how to go about it in your case. The main complication in comparison with the linked random-walk question is that the backward step is not $-1$. I'm going to divide your amounts by 6 to simplify them, so if you win, you win $1$, and if you lose, you lose $m$. The specific example you give has $m = 5$, but the argument works for any positive integer $m$ (I haven't tried to adapt it to a non-integer loss/win ratio).

Let's say that you start with $x$, and we want to find the probability of ruin if you play forever; call that probability $f(x)$. There are two ways the game can begin: either you win the first round with probability $p$ (in your example $p = 0.9$), followed by ruin from the new capital of $x+1$ with probability $f(x+1)$, or you lose the first round with probability $1-p$, followed by ruin from capital $x-m$ with probability $f(x-m)$. So $$ f(x) = p f(x+1) + (1-p)f(x-m). $$ This is valid as long as $x > 0$, so that you can actually play the first round. For $x \leq 0$, $f(x) = 1$ (the reserve is already exhausted, so ruin is certain). If you rearrange the above equation, you get a recursive formula for the function: $$ f(x+1) = \frac{f(x) - (1-p)f(x-m)}{p}. $$ But there is a problem: although we know that $f(x) = 1$ for all $x \leq 0$, we can't use those values to initiate the recursion, because the formula isn't valid for $x = 0$; computing $f(2)$ already requires $f(1)$. So we need to find $f(1)$ in some other way, and this is where the random walk comes in.

Imagine starting with a capital of $1$, and let $r_i$ denote the probability of eventual ruin by reaching the amount $-i$ exactly, without reaching any value between $-i$ and $1$ before that. For example, $r_2$ is the probability of (sooner or later) getting from $1$ to exactly $-2$ (this might happen by winning a few rounds to get to $m-2$ and then losing the next round, for instance). $r_0$ is the probability of ruin by getting to $0$ exactly. So, how can that last event happen? Losing the first round would jump over $0$ straight to $1-m$, so the only possibility is winning the first round (probability $p$), followed by either:

  • a ruin from $2$ straight to $0$ (over possibly many rounds; "straight" here refers to never passing through $1$ on the way), which is the same as from $1$ straight to $-1$ (probability of $r_1$); or
  • a "ruin" from $2$ to $1$ followed by ruin from $1$ to $0$ (probability of $r_0 \cdot r_0 = r_0^2$).

In other words, we have this equation: $$ r_0 = p(r_1 + r_0^2) $$ Similarly, by considering the possible "paths" that can take us from $1$ to $-i$, we get for each $i < m-1$ $$ r_i = p(r_{i+1} + r_0 r_i) $$ The $i = m-1$ case is slightly different from the others: ruin from $1$ to $-(m-1)$ can happen either by losing the first round directly (probability $1-p$), or by winning the first round to get to $2$, then (eventually) dropping from $2$ back to $1$, and then (again, eventually) ruin from $1$ to $-(m-1)$:

$$ r_{m-1} = (1-p) + p \cdot r_0 \cdot r_{m-1}. $$

In principle, one can solve all these equations for the $r_i$'s simultaneously, but we can use a trick to avoid that. Let $s = \sum_{i=0}^{m-1} r_i$ (this is the total probability of ruin when starting from $1$, that is, $f(1)$, which we wanted to find). Add up all the equations, and you get

$$ s = 1-p + p(s - r_0) + p r_0 s $$ Solving for $s$, $$ (1-p-pr_0) s = 1-p-pr_0, $$ which means that $s$ will have to be $1$, unless $1-p-pr_0 = 0$. As the linked explanation of the biased random walk proves rigorously, $s$ cannot be equal to $1$ (your expectation value is positive, so statistically you must be drifting away from $0$, not returning to it with certainty). We must therefore conclude that $1-p-pr_0 = 0$, which gives $r_0 = (1-p)/p$. It is easy to check that this leads to $r_i = (1-p)/p$ for all $i$, and indeed these values satisfy all the $r_i$ equations above. Thus finally, $$ f(1) = s = r_0 + r_1 + \dots + r_{m-1} = \frac{m(1-p)}{p}. $$ In your example, that is $f(1) = 5 \cdot 0.1/0.9 = 5/9 \approx 0.56$.
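As a quick numerical sanity check (a Python sketch; the variable names are mine), one can verify that $r_i = (1-p)/p$ satisfies every one of the equations above for the example's values, and sum them to get $f(1)$:

```python
p, m = 0.9, 5          # the example: win probability 0.9, loss of m = 5 units

r = [(1 - p) / p] * m  # claimed solution: r_i = (1-p)/p for every i

# interior equations: r_i = p (r_{i+1} + r_0 r_i) for i = 0, ..., m-2
for i in range(m - 1):
    assert abs(r[i] - p * (r[i + 1] + r[0] * r[i])) < 1e-12

# boundary equation: r_{m-1} = (1-p) + p r_0 r_{m-1}
assert abs(r[m - 1] - ((1 - p) + p * r[0] * r[m - 1])) < 1e-12

f1 = sum(r)            # f(1) = m(1-p)/p, which is 5/9 here
assert abs(f1 - m * (1 - p) / p) < 1e-12
```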

Using this as a starting point, you can now use the recursive equation from the beginning to find $f(x)$ for any $x$. With bets of \$6, \$2000 rescales to $x = 2000/6 \approx 333$, so $f(333)$ gives you the risk of ruin. Or, first find which $x$ gives you a tolerable risk, and from that determine the appropriate size of the bets, $\$2000/x$.
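Putting the pieces together, the recursion can be run numerically. A minimal Python sketch (the function name and its defaults are mine, using the example's $p = 0.9$ and $m = 5$):

```python
def ruin_probability(x, p=0.9, m=5):
    """f(x): probability of eventual ruin starting from a capital of x units,
    where each round wins 1 unit with probability p or loses m units."""
    f = {k: 1.0 for k in range(1 - m, 1)}  # f(k) = 1 for k <= 0: already ruined
    f[1] = m * (1 - p) / p                 # the starting value derived above
    for k in range(1, x):
        # rearranged recurrence: f(k+1) = (f(k) - (1-p) f(k-m)) / p
        f[k + 1] = (f[k] - (1 - p) * f[k - m]) / p
    return f[x]

# a $2000 bankroll at $6 per unit is about 333 units:
risk = ruin_probability(333)
```

For the example bet the computed risk of ruin at a $2000 bankroll turns out to be vanishingly small, so the interesting direction is the one mentioned in the question: fix an acceptable risk level, search for the smallest $x$ that achieves it, and size the bets as $\$2000/x$.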