Let $a$ be the expected number of additional games that need to be played if the two players are tied in wins. In particular, $a$ is our required number, since the players start off tied.
Let $b$ be the expected number of additional games if Player $1$ is leading by $1$, and let $c$ be the expected number of additional games if Player $1$ is trailing by $1$. Writing $r$ for the probability that Player $1$ wins any single game, we have the equations
$$\begin{align}a&=1+rb+(1-r)c,\\
b&=1+(1-r)a, \\
c&=1+ra.\end{align}$$
Use the above system of linear equations to find $a$.
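Substituting $b$ and $c$ into the first equation gives $a = 1 + r(1+(1-r)a) + (1-r)(1+ra) = 2 + 2r(1-r)a$, hence $a = \dfrac{2}{1-2r(1-r)}$. A quick numerical check of this closed form against a direct simulation (a sketch; the choice $r=0.6$ and the simulation parameters are arbitrary):

```python
import random

def expected_length(r):
    # Closed form obtained by substituting b and c into the first equation:
    # a = 1 + r*(1 + (1-r)*a) + (1-r)*(1 + r*a) = 2 + 2*r*(1-r)*a
    return 2 / (1 - 2 * r * (1 - r))

def simulate(r, trials=200_000, seed=0):
    # Monte Carlo estimate: play until one player leads by 2 games.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        lead, games = 0, 0
        while abs(lead) < 2:
            lead += 1 if rng.random() < r else -1
            games += 1
        total += games
    return total / trials

r = 0.6
print(expected_length(r))  # 2 / (1 - 0.48) ≈ 3.846
print(simulate(r))         # should be close to the exact value
```

One can also verify that the closed form satisfies the original system: with $a$ as above, $b=1+(1-r)a$ and $c=1+ra$ do give $a=1+rb+(1-r)c$.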
Remarks: $1.$ To justify the first equation, note that for sure we will be playing one more game; that's the $1$ in the equation. And that game cannot finish the match, since from a tied position a single game leaves a lead of only $1$. With probability $r$ our expected number of games beyond that will be $b$, and with probability $1-r$ it will be $c$. That yields the first equation. It may be prettier and clearer to write the equation as $a=r(1+b)+(1-r)(1+c)$.
The justifications for the other two equations are similar. Again, one might like to write the second equation as $b=r(1)+(1-r)(a+1)$, and do a similar rewrite of the third equation.
$2.$ As pointed out by Marc van Leeuwen, the argument tacitly assumes that the expectations exist. To show that they do is not difficult. Whatever $r$ is, the probability of two opposite results in a row is $2r(1-r)$, which is $\le 1/2$. So the probability that the match lasts more than $2n$ games is $\le (1/2)^n$, and therefore the expected length is finite.
$3.$ We used a strategy broadly similar to the one used in the related question about the probability that Player $1$ wins. This kind of strategy tends to work more widely for expectations than for probabilities, because of the linearity of expectation.
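The bound in Remark $2$ can be checked empirically: the match lasts more than $2n$ games exactly when each of the first $n$ consecutive pairs of games is split, which has probability $(2r(1-r))^n \le (1/2)^n$. A simulation sketch (the values of $r$ and $n$ are arbitrary):

```python
import random

def tail_probability(r, n, trials=100_000, seed=1):
    # Estimate Pr(match lasts more than 2n games) by direct simulation.
    rng = random.Random(seed)
    long_matches = 0
    for _ in range(trials):
        lead, games = 0, 0
        while abs(lead) < 2:
            lead += 1 if rng.random() < r else -1
            games += 1
        if games > 2 * n:
            long_matches += 1
    return long_matches / trials

r, n = 0.5, 4
print(tail_probability(r, n))      # estimate
print((2 * r * (1 - r)) ** n)      # exact value (1/2)^4 = 0.0625
```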
The key point about the Elo rating is that it is related to the log-odds of players winning games.
It assumes that there is a relationship across players, so that (ignoring the possibility of draws) if Player B is $10$ times as likely to beat Player A as Player A is to beat Player B, and Player C is $10$ times as likely to beat Player B as Player B is to beat Player C, then Player C is $100$ times as likely to beat Player A as Player A is to beat Player C.
The Elo rating is scaled so that (ignoring the possibility of draws) if Player B is $10$ times as likely to beat Player A as Player A is to beat Player B then the Elo rating of Player B should be $400$ higher than the Elo rating of Player A. Combining this with the earlier assumption has the result that, if Player C is $100$ times as likely to beat Player A as Player A is to beat Player C, then the Elo rating of Player C should be $800$ higher than the Elo rating of Player A: each linear increase in the difference of Elo ratings of $400$ multiplies the odds of the better player winning by a factor of $10$, so this is a logarithmic relationship.
Putting these together means that the prediction based on Elo ratings $R_A$ and $R_B$ gives $$400 \log_{10}(\text{Odds}(\text{B beats A})) = {R_B-R_A} $$ and that implies $$\text{Odds}(\text{B beats A}) = \dfrac{\Pr(\text{B beats A})}{\Pr(\text{A beats B})} = 10^{(R_B-R_A)/400} $$ and combining these with ${\Pr(\text{B beats A})}+{\Pr(\text{A beats B})}=1$ would give a probability prediction $$\Pr(\text{B beats A}) = \dfrac{10^{(R_B-R_A)/400}}{10^{(R_B-R_A)/400}+1} =\dfrac{1}{1+10^{(R_A-R_B)/400}}$$ and a predicted expected net result for Player B of $$\Pr(\text{B beats A}) - \Pr(\text{A beats B}) = \dfrac{10^{(R_B-R_A)/400}-1}{10^{(R_B-R_A)/400}+1} =\dfrac{1-10^{(R_A-R_B)/400}}{1+10^{(R_A-R_B)/400}}$$
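These formulas are straightforward to encode directly (a sketch; the function names are my own):

```python
def win_probability(r_b, r_a):
    # Pr(B beats A) predicted from Elo ratings (draws ignored):
    # 1 / (1 + 10^((R_A - R_B)/400))
    return 1 / (1 + 10 ** ((r_a - r_b) / 400))

def odds(r_b, r_a):
    # Odds(B beats A) = Pr(B beats A) / Pr(A beats B) = 10^((R_B - R_A)/400)
    return win_probability(r_b, r_a) / win_probability(r_a, r_b)

# A 400-point gap gives 10:1 odds, and odds multiply across players:
print(odds(400, 0))                   # ≈ 10
print(odds(800, 400) * odds(400, 0))  # ≈ odds(800, 0) = 100
```

The multiplicativity of the odds in the second print is exactly the cross-player assumption described above, and is what makes a single rating scale consistent.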
The Elo score then has two further useful features: first a mechanism for adjusting scores when results are not as expected (and a $K$ factor which attempts to balance the desire that incorrect scores should adjust as quickly as possible against a desire not to have too much volatility in scores); and second a method to address competitions which are not just win-lose, by focussing on expected net results from a contest rather than just the odds and probabilities of wins and losses.
Best Answer
The standard formula for Elo rating change is
$$ \Delta R = K(S-E) $$
(see e.g. Wikipedia), where $\Delta R$ is the change in rating, $S$ is the player’s score in the game ($0$, $\frac12$ or $1$), $E$ is the expected score (based on the current ratings of the players), and $K$ is a factor, for the choice of which there are many different conventions (see e.g. Wikipedia). Since you didn’t specify a $K$ factor, I’ll leave it variable.
The expected score based on the player’s rating $R$ and the opponent’s rating $O$ is
$$ E=\frac1{1+10^{(O-R)/400}}\;. $$
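Combining the update rule $\Delta R = K(S-E)$ with this expected-score formula gives a complete update step (a sketch; $K=32$ is just one common convention):

```python
def expected_score(R, O):
    # Expected score of a player rated R against an opponent rated O:
    # E = 1 / (1 + 10^((O - R)/400))
    return 1 / (1 + 10 ** ((O - R) / 400))

def rating_change(S, R, O, K=32):
    # Delta R = K * (S - E), where S is the actual score (0, 1/2 or 1).
    return K * (S - expected_score(R, O))

# Equal ratings give E = 1/2, so a win gains K/2 and a loss costs K/2:
print(rating_change(1, 0, 0))  # 16.0
print(rating_change(0, 0, 0))  # -16.0
```

Note that with equal $K$ factors the winner’s gain equals the loser’s loss, which is what keeps the two-player system below zero-sum, i.e. $O=-R$ throughout.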
In your situation with only two players starting from rating $0$, we will always have $O=-R$, so this becomes
$$ E=\frac1{1+10^{-R/200}}\;. $$
If we focus on the losing player, their score is always $S=0$, so we have the difference equation
$$ \Delta R=-\frac K{1+10^{-R/200}}\;. $$
Approximating this by a differential equation yields
$$ R'(t)=-\frac K{1+10^{-R(t)/200}}\;. $$
Wolfram|Alpha yields a complicated and unenlightening closed form for this. More insight is gained if we neglect the term $1$ in the denominator for large negative $R$, yielding
$$ R'(t)=-K\cdot10^{R(t)/200}\;. $$
The solution is
$$ R(t)=-200\log_{10}\left(\frac{K\ln 10}{200}t+c\right)\;, $$
so the magnitude of the players’ ratings increases logarithmically.
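A numerical sketch supports this: iterating the exact difference equation and comparing against the logarithmic approximation, with the constant $c$ fitted at $t=1000$ ($K=32$ and the horizon of $10000$ games are arbitrary choices):

```python
import math

K = 32
R = 0.0
history = []
for t in range(1, 10001):
    # Exact difference equation for the losing player.
    R -= K / (1 + 10 ** (-R / 200))
    history.append(R)

# Approximate solution R(t) = -200*log10(K*ln(10)/200 * t + c),
# with c fitted so the curves agree at t = 1000.
a = K * math.log(10) / 200
t0 = 1000
c = 10 ** (-history[t0 - 1] / 200) - a * t0
approx = -200 * math.log10(a * 10000 + c)
print(history[-1], approx)  # the two values should agree closely
```

The iterated rating and the logarithmic formula track each other to within a fraction of a percent over this range, consistent with the asymptotic claim.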