Probability with an increasing rate of success upon failure and a reset upon success

probability

Consider the following.
You have an initial chance of success of, say, 20%. Each failed attempt increases your chance by 10 percentage points, up to a maximum of 5 increments, for a total of 70%.

However, each success will reset your chances back to the initial 20%.

How can I find the overall probability of success?

A similar real-life example: you draw names from a hat over and over, and every time you draw your own name, all the previously drawn names (including yours) go back into the hat; you want to know how many times you can expect to draw your own name after X attempts. With 10 different names, your initial chance of success is 1/10, but if the name you draw is not yours, your chance rises to 1/9, then 1/8, and so on, until you draw your own name, at which point all the names go back in and your chance resets to 1/10. So if you did this 1000 times, your overall chance would be more than just 10% (1/10), since the chance increases with each failure, but it would be nowhere near 100% (10/10), because it keeps resetting. The only difference is that in this scenario the probability is not capped below 100% (it would eventually reach 100%), while in my problem the cap can be less than 100%.

Some more context: if I run this simulation, doing 10 tests with a sample of 1,000 attempts each, I get the following:

Scenario 1:
Initial Chance: 20%
Increment on failure: 10%
Max no. of increments: 5 (which can increase the chance up to 70%)
Overall Chance: ~33% (the results I get are around 33.2%–34.2%)

Scenario 2:
Initial Chance: 40%
Increment on failure: 12%
Max no. of increments: 5 (which can increase the chance up to 100%)
Overall Chance: ~49% (the results I get are around 49.1%–50.1%)

Scenario 3:
Initial Chance: 0%
Increment on failure: 15%
Max no. of increments: 5 (which can increase the chance up to 75%)
Overall Chance: ~25% (the results I get are around 25.1%–25.6%)
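
For reference, here is a rough sketch of the kind of simulation I'm running (Python; this is only illustrative, not my actual code):

```python
import random

def simulate(initial, increment, max_increments, attempts, seed=None):
    """Fraction of successes when each failure raises the chance by `increment`
    (at most `max_increments` times) and each success resets it to `initial`."""
    rng = random.Random(seed)
    chance = initial
    failures = 0
    successes = 0
    for _ in range(attempts):
        if rng.random() < chance:        # success: count it and reset the chance
            successes += 1
            chance = initial
            failures = 0
        elif failures < max_increments:  # failure: bump the chance, up to the cap
            failures += 1
            chance = initial + failures * increment
    return successes / attempts

# Scenario 3: 0% initial chance, +15% per failure, capped at 5 increments
print([round(simulate(0.0, 0.15, 5, 1000), 3) for _ in range(10)])
```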

I'm trying to obtain the overall chance with a formula rather than by running simulations. If there are already concepts or formulas related to this problem that I could start my research with, pointers to them would be greatly appreciated.

One more thing: I'd gladly pay for a solution to this problem. It's not homework or anything, just something I'm writing for myself.

Best Answer

The problem you describe seems well suited to analysis as a Markov chain. To define a particular Markov chain, you want a state space, which is a set of discrete possible states numbered $1, 2, 3, \ldots, M$; you want a sequence of discrete steps forward in time; and you want to say, if you are in state number $i$ at some point in time, what the probability is that you will be in state number $j$ at the next time step. For every $i$ and $j$ in $1, 2, 3, \ldots, M$ there is such a probability (which may be zero), and this probability stays the same at every time step.

The way this can model the increasing chances of success that you have defined is by the transition from one state to another. Let state $1$ be the starting state, state $2$ be a state you reach after your first failure, state $3$ be a state you reach after your second failure, and so forth.

If you have decided that after $f$ failures the probability of success stops increasing and you just continue trials at that same probability until you succeed, then you only need to define states up to state number $M = f + 1.$ In each state $i$ you have some probability of success (which means a transition to state $1$) and some probability of failure (which means a transition to state $i + 1$, except in state $M$ where you stay in state $M$).

You arrange the transition probabilities in a transition matrix so that the cell in row $i$ and column $j$ is the probability of transition to state $j$ if you are in state $i.$ For Scenario $3$, for example, the transition matrix is

$$ Q = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0.15 & 0 & 0.85 & 0 & 0 & 0 \\ 0.3 & 0 & 0 & 0.7 & 0 & 0 \\ 0.45 & 0 & 0 & 0 & 0.55 & 0 \\ 0.6 & 0 & 0 & 0 & 0 & 0.4 \\ 0.75 & 0 & 0 & 0 & 0 & 0.25 \end{pmatrix}. $$
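If it helps, here is a small sketch (Python/NumPy; the function name is my own) that builds this transition matrix for any of your scenarios:

```python
import numpy as np

def transition_matrix(initial, increment, max_increments):
    """State 1 is the start/reset state; state i+1 is reached after i failures."""
    M = max_increments + 1
    Q = np.zeros((M, M))
    for i in range(M):
        p = initial + i * increment       # success probability in state i+1
        Q[i, 0] = p                       # success: back to state 1
        Q[i, min(i + 1, M - 1)] += 1 - p  # failure: next state (or stay in the last one)
    return Q

print(transition_matrix(0.0, 0.15, 5))    # reproduces the Scenario 3 matrix above
```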

If you want to know the probability of landing in state $j$ after $n$ timesteps, starting in state $i$, you compute the $n$th power of the transition matrix, $Q^n,$ and read off the entry in row $i$ and column $j.$

In general a Markov chain lets you specify a probability distribution for your initial state, but in your case your starting state is state $1$ with $100\%$ probability, so the initial state vector is $s_0=\begin{pmatrix}1 & 0 & 0 & 0 & 0 & 0\end{pmatrix}.$ The probability distribution after $n$ timesteps then is the vector $s_0 Q^n.$
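As a concrete illustration (again a Python/NumPy sketch), you can compute $s_0 Q^n$ for the Scenario $3$ matrix and watch the distribution settle down:

```python
import numpy as np

# Scenario 3 transition matrix Q and initial state vector s0 from above
Q = np.array([
    [0.00, 1, 0.00, 0.0, 0.00, 0.00],
    [0.15, 0, 0.85, 0.0, 0.00, 0.00],
    [0.30, 0, 0.00, 0.7, 0.00, 0.00],
    [0.45, 0, 0.00, 0.0, 0.55, 0.00],
    [0.60, 0, 0.00, 0.0, 0.00, 0.40],
    [0.75, 0, 0.00, 0.0, 0.00, 0.25],
])
s0 = np.array([1.0, 0, 0, 0, 0, 0])

for n in (1, 5, 20, 100):
    print(n, (s0 @ np.linalg.matrix_power(Q, n)).round(5))
# the first entry (the probability of being in state 1) approaches about 0.25337
```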

The stationary distribution of the Markov chain is a probability distribution $s$ over the states that stays the same after any number of time steps, that is, $s = sQ = sQ^2 = sQ^3,$ etc. At least in the examples you are considering, if you start in state $1$ and continue taking time steps indefinitely, the probability distribution over the states will converge to the stationary distribution. There is an algorithm to find the stationary distribution; it is not a simple one-line formula, but it is a deterministic algorithm that takes a limited number of steps and requires no simulation. A calculator I found online (actually a downloadable Excel file) computed the steady-state vector as $$\begin{pmatrix}0.25337 & 0.25337 & 0.21537 & 0.15076 & 0.08292 & 0.04422\end{pmatrix},$$ which says that if you continue long enough, the fraction of time steps on which you are in state $1$ will converge to $25.337\%.$ Since being in state $1$ means you had a success in the previous time step (except for the initial state), I think it is reasonable to say that in the long run this process will have a success $25.337\%$ of the time. This matches your simulation results.
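For completeness, here is one way (a Python/NumPy sketch, not the Excel calculator mentioned above) to compute the stationary distribution directly, by solving $sQ = s$ together with the requirement that the entries of $s$ sum to $1$:

```python
import numpy as np

Q = np.array([                      # Scenario 3 transition matrix from above
    [0.00, 1, 0.00, 0.0, 0.00, 0.00],
    [0.15, 0, 0.85, 0.0, 0.00, 0.00],
    [0.30, 0, 0.00, 0.7, 0.00, 0.00],
    [0.45, 0, 0.00, 0.0, 0.55, 0.00],
    [0.60, 0, 0.00, 0.0, 0.00, 0.40],
    [0.75, 0, 0.00, 0.0, 0.00, 0.25],
])

# sQ = s is equivalent to (Q^T - I) s^T = 0; append the normalization sum(s) = 1.
M = Q.shape[0]
A = np.vstack([Q.T - np.eye(M), np.ones(M)])
b = np.zeros(M + 1); b[-1] = 1.0
s, *_ = np.linalg.lstsq(A, b, rcond=None)

print(s.round(5))   # [0.25337 0.25337 0.21537 0.15076 0.08292 0.04422]
print(s[0])         # long-run success rate, about 25.337%
```

Running the same calculation with the Scenario 1 and Scenario 2 matrices gives roughly 33.8% and 49.5%, in line with your simulation results for those cases as well.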
