How does Elo rating scale with games played

mathematical-modeling, probability

I take just two players, both starting at 0 Elo, and have them play against each other, with one player winning all games. I also consider a pure version of the Elo system, without artificially introduced cut-offs (such as FIDE's 400-point rule).

The winner keeps gaining rating while the other player keeps losing it, but the gain per game decreases as the rating difference between the players grows.

Mathematically, how does the Elo rating scale with the number of games played, n, for large n?

Best Answer

The standard formula for Elo rating change is

$$ \Delta R = K(S-E) $$

(see e.g. Wikipedia), where $\Delta R$ is the change in rating, $S$ is the player’s score in the game ($0$, $\frac12$ or $1$), $E$ is the expected score (based on the current ratings of the players), and $K$ is a factor for whose choice there are many different conventions. Since you didn’t specify a $K$ factor, I’ll leave it variable.

The expected score based on the player’s rating $R$ and the opponent’s rating $O$ is

$$ E=\frac1{1+10^{(O-R)/400}}\;. $$
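
As a concrete sanity check, here is a minimal sketch of these two formulas in Python; the function names and the default $K=32$ are illustrative assumptions of mine, since no particular $K$ is fixed here:

```python
def expected_score(R, O):
    """Expected score of a player rated R against an opponent rated O."""
    return 1 / (1 + 10 ** ((O - R) / 400))

def rating_change(R, O, S, K=32):
    """Elo update Delta R = K * (S - E) for actual score S in {0, 1/2, 1}."""
    return K * (S - expected_score(R, O))

print(expected_score(0, 0))    # 0.5: equal ratings
print(expected_score(0, 400))  # 1/11 = 0.0909...: a 400-point underdog
print(rating_change(0, 0, 0))  # -16.0: the loser of the very first game
```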

In your situation with only two players starting from rating $0$, we will always have $O=-R$, so this becomes

$$ E=\frac1{1+10^{-R/200}}\;. $$

If we focus on the losing player, their score is always $S=0$, so we have the difference equation

$$ \Delta R=-\frac K{1+10^{-R/200}}\;. $$
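
Iterating this difference equation directly shows how the per-game loss shrinks as the gap widens; a minimal sketch, again assuming $K=32$ purely for illustration:

```python
K = 32  # illustrative K factor; the answer leaves K variable

R = 0.0  # loser's rating (the winner's is -R by symmetry)
for n in range(1, 11):
    delta = -K / (1 + 10 ** (-R / 200))  # Delta R = -K / (1 + 10^(-R/200))
    R += delta
    print(f"game {n:2d}: Delta R = {delta:6.2f}, loser's rating = {R:7.2f}")
```

The first update is $-K/2=-16$, and by game $10$ the loss per game has already shrunk to about $-7$.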

Approximating this difference equation by a differential equation, with the number of games $t$ treated as a continuous variable, yields

$$ R'(t)=-\frac K{1+10^{-R(t)/200}}\;. $$

Wolfram|Alpha yields a complicated and unenlightening closed form for this. More insight is gained if we neglect the term $1$ in the denominator for large negative $R$, yielding

$$ R'(t)=-K\cdot10^{R(t)/200}\;. $$

Separating variables as $10^{-R/200}\,\mathrm dR=-K\,\mathrm dt$ and integrating yields $10^{-R/200}=\frac{K\log10}{200}\,t+c$, i.e.

$$ R(t)=-200\log_{10}\left(\frac{K\log10}{200}t+c\right)\;, $$

so the magnitude of the players’ ratings (the winner’s rating being the mirror image $-R(t)$, by symmetry) grows logarithmically with the number of games played.
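
As a numerical check (a sketch, with the same illustrative $K=32$ as above): the logarithmic law predicts a drop of about $200\log_{10}10=200$ rating points per tenfold increase in the number of games, asymptotically independent of $K$ and of the integration constant $c$. Iterating the exact difference equation reproduces this:

```python
K = 32  # illustrative K factor, as above

R = 0.0
prev = None
for n in range(1, 1_000_001):
    R -= K / (1 + 10 ** (-R / 200))  # exact difference equation for the loser
    if n in (100, 1_000, 10_000, 100_000, 1_000_000):
        # The asymptotic formula predicts R(10t) - R(t) -> -200.
        drop = "" if prev is None else f"  (drop over last decade: {R - prev:7.1f})"
        print(f"n = {n:>9}: R = {R:9.1f}{drop}")
        prev = R
```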
