Two players A and B each have a fair coin, which they toss simultaneously (each simultaneous toss counts as one round). They stop after $n$ ($\ge 1$) rounds because, for the first time, they have accumulated the same number of heads (possibly 0, in the case that neither of them got any heads). What is the distribution of $n$ and its expectation?
Probability – Solving Coin Toss Problems
Related Solutions
By the standard techniques explained here in a similar context, one can show that the first time $T_A$ when the word A = HTHT is produced has generating function $$ E(s^{T_A})=\frac{s^4}{a(s)},\qquad a(s)=(2-s)(8-4s-s^3)-2s^3, $$ and that the first time $T_B$ when the word B = TTH is produced has generating function $$ E(s^{T_B})=\frac{s^3}{b(s)},\qquad b(s)=(2-s)(4-2s-s^2). $$ The next step is to expand these as power series in $s$, getting $$ E(s^{T_A})=\sum_{n\ge4}p_A(n)s^n,\quad E(s^{T_B})=\sum_{k\ge3}p_B(k)s^k, $$ and finally to compute the sum $$ P(T_B<T_A)=\sum_{k\ge3}\sum_{n\ge k+1}p_B(k)p_A(n). $$ An alternative solution is to consider the Markov chain on the space of pairs of prefixes of the words A and B and to solve directly the associated first hitting time problem for this Markov chain, as explained, once again, here.
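For concreteness, here is a minimal numerical sketch of that last computation (an illustration of mine, not part of the argument): it expands both generating functions by power-series long division, using the expanded forms $a(s)=16-16s+4s^2-4s^3+s^4$ and $b(s)=8-8s+s^3$, and truncates the double sum.

```python
from fractions import Fraction

def series_inverse(c, N):
    """First N coefficients of 1/(c[0] + c[1] s + ...), with c[0] != 0."""
    f = [Fraction(1) / c[0]]
    for n in range(1, N):
        f.append(-sum(c[j] * f[n - j]
                      for j in range(1, min(n, len(c) - 1) + 1)) / c[0])
    return f

N = 200  # truncation order; both series decay geometrically
a = [Fraction(v) for v in (16, -16, 4, -4, 1)]  # (2-s)(8-4s-s^3) - 2s^3, expanded
b = [Fraction(v) for v in (8, -8, 0, 1)]        # (2-s)(4-2s-s^2), expanded

p_A = [Fraction(0)] * 4 + series_inverse(a, N)  # p_A(n) = [s^n] s^4 / a(s)
p_B = [Fraction(0)] * 3 + series_inverse(b, N)  # p_B(k) = [s^k] s^3 / b(s)

tail_A, prob = Fraction(1), Fraction(0)
for k in range(N):
    tail_A -= p_A[k]          # now tail_A = P(T_A > k)
    prob += p_B[k] * tail_A   # accumulate P(T_B = k) * P(T_A > k)
print(float(prob))            # truncated value of P(T_B < T_A)
```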
(+1 to Arturo's and Mike's comments.)
Added later: introduce the partial fraction decomposition $$ \frac1{(1-s)b(s)}=\sum_{i=1}^4\frac{c_i}{1-s\gamma_i}. $$ Then one can show that $$ P(T_B<T_A)=\sum_{i=1}^4\frac{c_i}{a(\gamma_i)}, $$ by decomposing the rational fractions $1/b(s)$ and $1/a(s)$ into partial fractions, relying on the fact that all their poles are simple, and using some elementary algebraic manipulations. (In a nutshell: $\sum_{n}P(T_B<n)\,s^n=s^4/((1-s)b(s))$, hence $P(T_B<T_A)=\sum_n p_A(n)P(T_B<n)=\sum_i c_i\gamma_i^{-4}E(\gamma_i^{T_A})=\sum_i c_i/a(\gamma_i)$.)
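As a sanity check on this closed form (again a sketch of mine, not part of the answer): the roots of $q(s)=(1-s)b(s)$ are $1$, $2$, and $-1\pm\sqrt5$, all simple, and for a simple pole at $s_i=1/\gamma_i$ the coefficient is $c_i=-\gamma_i/q'(s_i)$. The printed value should agree with the truncated-series value above.

```python
from math import sqrt

def a(s):        # denominator of E(s^{T_A})
    return (2 - s) * (8 - 4*s - s**3) - 2*s**3

def q_prime(s):  # derivative of q(s) = (1-s) b(s) = 8 - 16s + 8s^2 + s^3 - s^4
    return -16 + 16*s + 3*s**2 - 4*s**3

total = 0.0
for root in (1.0, 2.0, -1 + sqrt(5), -1 - sqrt(5)):  # simple roots of q
    gamma = 1 / root
    c = -gamma / q_prime(root)   # partial fraction coefficient c_i
    total += c / a(gamma)
print(total)                     # numerically equals P(T_B < T_A)
```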
At this point, I feel it is my duty to warn the OP that asking question after question along the lines of this post, on math.SE or elsewhere, is somewhat pointless. Much more useful would be to learn once and for all the basics of the theory of finite Markov chains, for instance by delving into the quite congenial Markov Chains by James Norris, and to learn to manipulate power series, for instance from the magnificent generatingfunctionology by Herbert Wilf.
Here's one way to do it. Let's call the two parties Alice and Bob (as is popular to do in cryptography and theoretical computer science more broadly these days).
Alice and Bob agree on a secure hash function $h$. Alice chooses a random string $r_A$ and Bob chooses a random string $r_B$. Bob tells Alice $r_B$.
Now, Alice flips a coin; call the result $x$. Alice sends $h(x,r_A,r_B)$ to Bob and asks Bob to call the toss. Say Bob calls $y$. Then Alice tells Bob $(x,r_A)$, and he can check for himself whether $x = y$: Bob computes $h(y,r_A,r_B)$ and compares it with the hash Alice sent; the two match exactly when $x = y$ (barring a hash collision). In this way, if Bob called it wrong, Alice can prove that he was wrong.
Obviously, if Bob calls the coin flip correctly, the two hashes match. Moreover, it is extremely hard for Alice to cheat: if Bob says "tails", for example, when the toss was indeed "tails", but Alice wants to trick him into thinking it was "heads", she would have to come up with a random string $r$ such that $h(H,r,r_B) = h(T,r_A,r_B)$, which is hard by the assumption that $h$ is a secure hash function and the fact that Bob chose $r_B$. Essentially, the purpose of $r_A$ and $h$ is to make Alice "commit" to her initial toss $x$. The point of $r_B$ is that, without it, Alice might pick an $r_A$ for which she already knows another string $r$ that would let her lie.
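Here is a minimal sketch of the protocol in Python (my illustration; the choice of SHA-256 for $h$, the 16-byte random strings, and the simple concatenation are assumptions, not part of the protocol above):

```python
import hashlib
import secrets

def h(x: str, r_a: bytes, r_b: bytes) -> bytes:
    """The agreed hash: here SHA-256 over a simple concatenation."""
    return hashlib.sha256(x.encode() + r_a + r_b).digest()

r_b = secrets.token_bytes(16)   # Bob picks r_B and sends it to Alice

x = secrets.choice("HT")        # Alice flips her coin...
r_a = secrets.token_bytes(16)   # ...picks r_A...
commitment = h(x, r_a, r_b)     # ...and sends this commitment to Bob

y = secrets.choice("HT")        # Bob calls the toss

# Alice reveals (x, r_A); Bob checks she did not change x after his call
assert h(x, r_a, r_b) == commitment
print("Bob called it right" if x == y else "Bob called it wrong")
```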
Best Answer
The problem can be mapped onto a one-dimensional discrete random walk. Imagine that you start at 0 and make a step of $+1$ (for the outcome HT) with probability $1/4$, a step of $-1$ (for the outcome TH) with probability $1/4$, and a step of $0$ (for the outcomes TT and HH) with probability $1/2$. The question about the number of rounds after which both players have accumulated the same number of heads is then equivalent to asking after how many steps the random walk returns to the origin.
Let us introduce the random variable $Z_i$ at step $i$, with value $+1$ (for HT), $-1$ (for TH), and $0$ in the other cases. As described above, $P(Z_i = 1) =1/4$, $P(Z_i = -1) =1/4$, and $P(Z_i = 0) =1/2$. Furthermore, introduce $S_n = \sum_{i=1}^n Z_i$; then $S_n$ is the number of heads player 1 has obtained minus the number of heads player 2 has obtained after $n$ rounds. The sequence $\{S_n\}$ is called a random walk. The question about both players having the same number of heads corresponds to the problem of the first return of this walk to the origin.
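In code, the mapping looks like this (a tiny sketch of mine, just to fix ideas):

```python
import random
from collections import Counter

def Z():
    """One round: player 1's toss minus player 2's toss (heads = 1)."""
    return random.getrandbits(1) - random.getrandbits(1)

freq = Counter(Z() for _ in range(100_000))
print({z: freq[z] / 100_000 for z in (-1, 0, 1)})  # close to 1/4, 1/2, 1/4
```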
Let us first solve a simpler problem: what is the probability $P_n$ of returning to the origin after $n$ steps? At each step there are three outcomes, $Z = \pm 1, 0$. In order to return to the origin, we need $m$ steps of $+1$, $m$ steps of $-1$, and $n-2m$ steps of $0$ (for some $0 \leq m \leq n/2$); this event has probability $$ \frac{n!}{m!\, m!\, (n-2m)!} P(Z_i = 1)^m P(Z_i = -1)^m P(Z_i = 0)^{n-2m} = \frac{n!}{(m!)^2 (n-2m)!\, 2^{n+2m}} .$$ The probability of returning after $n$ steps is thereby given by $$ P_n = \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{n!}{(m!)^2 (n-2m)!\, 2^{n+2m} }= {2n \choose n}2^{-2n}.$$ What remains to be done is to obtain the probability of the first return.
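One can check the collapse of the sum numerically (a quick sketch; it verifies the identity exactly for small $n$, which is not a proof but catches typos):

```python
from fractions import Fraction
from math import comb, factorial

def P_sum(n):
    """The multinomial sum above, evaluated exactly."""
    return sum(Fraction(factorial(n),
                        factorial(m)**2 * factorial(n - 2*m) * 2**(n + 2*m))
               for m in range(n // 2 + 1))

for n in range(1, 13):
    assert P_sum(n) == Fraction(comb(2*n, n), 2**(2*n))
print("P_n = C(2n, n) / 2^(2n) verified for n = 1..12")
```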
There is a quite general relation between the probability $P_n$ of a return after $n$ steps and the probability $\pi_n$ of a first return: $$ P_n= \pi_1 P_{n-1} + \cdots + \pi_n P_0.$$ The interpretation is simple: to return at step $n$, either you return for the first time after one step and then return again $n-1$ steps later, or you return for the first time after two steps and then again $n-2$ steps later, ..., or you return for the first time after $n$ steps (with $P_0 = 1$).
The right-hand side is a convolution of two probability distributions. Therefore, it is convenient to introduce the generating functions $P(x) = \sum_{n=0}^\infty P_n x^n$ and $\Pi (x) = \sum_{n=0}^\infty \pi_n x^n$ (we set $\pi_0=0$). Multiplying the equation above by $x^n$ and summing over $n$, we obtain the relation $P(x) = 1 + P(x) \Pi(x)$ (remember $\pi_0 =0$ and $P_0 =1$). Solving this equation gives $$\Pi(x) = \frac{ P(x) -1 }{P(x)}.$$ All that remains to be done is to evaluate $P(x)$. This can be done with the generalized binomial theorem, with the result $$ P(x) = \frac{1}{\sqrt{1-x}}$$ and therefore $$\Pi(x) = 1- \sqrt{1-x}.$$ Expanding this in a Taylor series, we obtain the probability of the first return (the probability that the game stops after $n$ rounds): $$ \pi_n = \frac{{2n \choose n} }{(2n -1)\, 2^{2n}}, \qquad n\geq1.$$
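Both the convolution relation and the Taylor coefficients can be cross-checked numerically (my sketch, not needed for the argument): solve the convolution recursively for $\pi_n$ and compare with the closed form.

```python
from fractions import Fraction
from math import comb

N = 30
P = [Fraction(comb(2*n, n), 2**(2*n)) for n in range(N)]  # return probabilities
pi = [Fraction(0)] * N
for n in range(1, N):
    # P_n = pi_1 P_{n-1} + ... + pi_n P_0 with P_0 = 1, solved for pi_n
    pi[n] = P[n] - sum(pi[k] * P[n - k] for k in range(1, n))

assert all(pi[n] == Fraction(comb(2*n, n), (2*n - 1) * 2**(2*n))
           for n in range(1, N))
print("pi_n matches the Taylor coefficients for n < 30")
```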
The expected number of rounds can also be computed: $$ \bar n = \sum_{n=1}^\infty n\, \pi_n = \infty,$$ i.e., it diverges. The reason is the asymptotic behavior of $\pi_n$, $$\pi_n \sim \frac{1}{2 \sqrt{\pi}\, n^{3/2}},$$ so that $n\,\pi_n \sim n^{-1/2}$ is not summable.
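To see the divergence numerically (a sketch; it uses the ratio $\pi_{n+1}/\pi_n = (2n-1)/(2n+2)$, which follows from the closed form above), note that the partial sums of $n\,\pi_n$ grow like $\sqrt{n/\pi}$, up to an additive constant:

```python
from math import sqrt, pi as PI

p, partial, n = 0.5, 0.5, 1        # pi_1 = 1/2; partial sum of k * pi_k
while n < 10**6:
    p *= (2*n - 1) / (2*n + 2)     # pi_{n+1} from pi_n
    n += 1
    partial += n * p
    if n in (10**2, 10**4, 10**6):
        print(n, round(partial, 2), round(sqrt(n / PI), 2))
```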
Edit:
A question has arisen about how to evaluate the sum leading to $P_n$. Borrowing from Mike's solution, there is an alternative way to get $P_n$ (which proves that the sum really is what I wrote). Instead of looking at the game as having $n$ rounds, we can think of it as having $2n$ rounds, in which the tosses of player 1 and player 2 alternate. If player 1 tosses H we move $+1/2$, for T we move $-1/2$; if player 2 tosses T we move $+1/2$, for H we move $-1/2$. After two such half-steps this yields the same random walk as described above (HH and TT: no move; HT: move $+1$; TH: move $-1$). The probability $P_n$ to return after $n$ rounds of the original game is therefore the probability to return after $2n$ rounds of the new game. There are $2n \choose n$ paths which return after $2n$ rounds, because we can choose which $n$ of the steps move up (H for player 1 and T for player 2), and the remaining steps must move down. In total there are $2^{2n}$ paths of length $2n$. The probability to return to the origin after $2n$ steps ($n$ rounds of the original game) is therefore $$ P_n = {2n \choose n} 2^{-2n}.$$
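Finally, a direct simulation of the original game (a sketch of mine; walks are capped at a maximum length, since the expectation is infinite and uncapped runs can be extremely long) reproduces the first-return probabilities $\pi_n$:

```python
import random
from math import comb

def rounds_until_tie(cap=10_000):
    """Play the original game; return n, or None if no tie occurs within cap rounds."""
    diff = 0
    for n in range(1, cap + 1):
        diff += random.getrandbits(1) - random.getrandbits(1)  # heads of 1 minus 2
        if diff == 0:
            return n
    return None

trials = 100_000
counts = {}
for _ in range(trials):
    n = rounds_until_tie()
    counts[n] = counts.get(n, 0) + 1

# empirical frequency vs. pi_n = C(2n, n) / ((2n - 1) 4^n)
for n in range(1, 6):
    exact = comb(2*n, n) / ((2*n - 1) * 4**n)
    print(n, counts.get(n, 0) / trials, round(exact, 4))
```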