Let $p$ be the probability that the coin comes up heads. Suppose that you’ve tossed a head $n$ times. If you play one more time, your expected payoff is $100(n+1)p$; if this is less than the $100n$ that you’ve won so far, you shouldn’t play again. The inequality $100(n+1)p<100n$ boils down to $n(1-p)>p$, or $n>\frac{p}{1-p}$; if $n=\frac{p}{1-p}$, your expected payoff is equal to your current winnings. Thus, if $n\ge\left\lfloor\frac{p}{1-p}\right\rfloor+1$, you should not play again. If $n=\frac{p}{1-p}$, you can play again or not. And if $n<\frac{p}{1-p}$, you should play again. We can get a slightly nicer expression for the cutoff: you should stop when $n$ reaches
$$\left\lfloor\frac{p}{1-p}\right\rfloor+1=\left\lfloor\frac1{1-p}-1\right\rfloor+1=\left\lfloor\frac1{1-p}\right\rfloor\;.$$
For a fair coin, with $p=\frac12$, this says that if $n>1$, you shouldn’t play again: if you toss heads once, you can go ahead and try again or not, as you please, but if you toss heads twice, you should quit. If $p=4/5$, the cutoff is $n>4$: if you’ve managed to toss four heads in a row, you can try again or not, but if you’ve managed five, quit.
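As a sanity check, here is a small Python sketch of the stopping rule (using exact rational arithmetic, since floating point can misplace the floor near $p=1$); the function name `cutoff` is ours, not from the problem:

```python
from fractions import Fraction
from math import floor

def cutoff(p):
    """Smallest number of heads at which stopping is optimal: floor(1/(1-p))."""
    return floor(1 / (1 - p))

for p in (Fraction(1, 2), Fraction(4, 5), Fraction(7, 10), Fraction(99, 100)):
    n = cutoff(p)
    # Playing again from n heads does not improve the expected payoff ...
    assert 100 * (n + 1) * p <= 100 * n
    # ... but playing from n-1 heads was at least as good as stopping.
    assert 100 * n * p >= 100 * (n - 1)
```

For $p=\frac12$ this gives the cutoff $2$, and for $p=4/5$ the cutoff $5$, matching the discussion above.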
To see how this relates to Sam’s calculus-based answer, start with the Maclaurin series for $\ln(1-x)$:
$$\ln p=-\sum_{n\ge 1}\frac{(1-p)^n}n\;,$$
so
$$\frac1{-\ln p}=\frac1{\sum_{n\ge 1}\frac{(1-p)^n}n}=\frac1{1-p}\cdot\frac1{1+\frac12(1-p)+\frac13(1-p)^2+\ldots}\;.$$
Clearly $1<1+\frac12(1-p)+\frac13(1-p)^2+\ldots<\sum_{k\ge 0}(1-p)^k=\frac1p$, so
$$\frac{p}{1-p}<\frac1{-\ln p}<\frac1{1-p}=\frac{p}{1-p}+1\;.$$
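A quick numerical spot-check of this sandwich (an illustration only, not part of the proof):

```python
import math

def sandwich_holds(p):
    """Check p/(1-p) < 1/(-ln p) < p/(1-p) + 1 for p in (0, 1)."""
    lower = p / (1 - p)
    return lower < 1 / (-math.log(p)) < lower + 1

# The bound holds across the whole range, not only near p = 1.
assert all(sandwich_holds(p) for p in (0.1, 0.5, 0.8, 0.9, 0.99))
```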
Once $A$ has finished playing, ending with an amount $a$, the strategy for $B$ is simple and well known: use bold play.
That is, aim for a target sum of $a+\epsilon$ and bet whatever is needed to reach this goal exactly, or everything, whichever is less. As seen for example here, this strategy maximizes the probability of $B$ reaching the target, and that probability depends only on the initial proportion $\alpha:=\frac{100}{a+\epsilon}\in(0,1)$. (Of course, $B$ wins immediately if $a<100$.) While the function $p(\alpha)$ that returns $B$'s winning probability is fractal and depends on the dyadic expansion of $\alpha$, for simplicity (or for a first approximate analysis) we can assume that $p(\alpha)=\alpha$: if the coin were fair, we would indeed have $p(\alpha)=\alpha$, and the coin is quite close to fair.
Also, we drop the $\epsilon$, as $B$ may choose it arbitrarily small. (This is the same as saying that $B$ wins in case of a tie.)
In view of this, what should $A$ do?
If $A$ does not play at all, $B$ wins with probability $\approx 1$.
If $A$ decides to bet $x$ once and then stop, $B$ wins if either $A$ loses (probability $0.51$), in which case $B$ wins immediately, or $A$ wins (probability $0.49$) and then $B$ wins, as seen above, with probability $p(\frac{100}{100+x})\approx \frac{100}{100+x}$. So if $A$ decides beforehand to play only once, she had better bet all she has, and thus wins the grand prize with probability $\approx 0.49\cdot(1-p(\frac12))\approx \frac14$.
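Under the approximation $p(\alpha)\approx\alpha$ (an assumption, not the exact fractal function), this one-shot analysis is easy to check in a few lines of Python; `win_prob_one_bet` is a name introduced here for illustration:

```python
def win_prob_one_bet(x):
    """A bets x (0 < x <= 100) once and stops; approximate winning probability,
    assuming B's bold-play success probability p(alpha) is roughly alpha."""
    return 0.49 * (1 - 100 / (100 + x))

# The probability increases with the stake, so betting everything is best,
# giving roughly 0.49 * 1/2 = 0.245.
probs = [win_prob_one_bet(x) for x in (25, 50, 75, 100)]
assert all(a < b for a, b in zip(probs, probs[1:]))
assert abs(win_prob_one_bet(100) - 0.245) < 1e-12
```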
Assume $A$ wins the first round and has $200$. What is the best decision now?
Betting $x<100$ will result in a winning probability of approximately
$$0.49\cdot\Bigl(1-\frac{100}{200+x}\Bigr)+0.51\cdot\Bigl(1- \frac{100}{200-x}\Bigr)\;.$$
It looks like the best thing to do is to stop playing (with winning probability $\approx\frac12$ now).
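A small numerical sweep (still assuming $p(\alpha)\approx\alpha$) supports this; the name `f` is just shorthand for the expression above:

```python
def f(x):
    """A holds 200 and bets x, 0 <= x < 100; approximate winning probability."""
    return 0.49 * (1 - 100 / (200 + x)) + 0.51 * (1 - 100 / (200 - x))

# Sweep x over [0, 100): the maximum sits at x = 0, i.e. A should stop.
xs = [x / 10 for x in range(1000)]
assert max(xs, key=f) == 0.0
assert abs(f(0) - 0.5) < 1e-12
```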
Alternatively, let us assume instead that $A$ employs bold play as well, with a target sum $T>100$. Then the probability of reaching the target is $\approx \frac{100}{T}$, so the total probability of $A$ winning is approximately
$$ \frac{100}T\cdot(1-\frac{100}T)$$
and this is maximized precisely when $T=200$.
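Checking over integer targets (same $p(\alpha)\approx\alpha$ approximation) confirms the maximum:

```python
def g(T):
    """Approximate probability that A wins when bold-playing toward target T:
    reach T with prob ~ 100/T, then survive B's bold play with prob ~ 1 - 100/T."""
    return (100 / T) * (1 - 100 / T)

# g(T) = u*(1-u) with u = 100/T peaks at u = 1/2, i.e. T = 200, with value 1/4.
assert max(range(101, 1001), key=g) == 200
assert abs(g(200) - 0.25) < 1e-12
```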
This repeats what we suspect from above:
The optimal strategy for $A$ is to play once and try to double, resulting in a winning probability $\approx \frac14$.
Admittedly, the optimality of this strategy for $A$ is not rigorously shown, and in particular there may be some gains from exploiting the detailed shape of $B$'s winning probability function, but I am pretty sure this is a not-too-bad approximation.
Best Answer
The expected payoff on the first round is the expectation of a Binomial distribution with parameters $(4,\frac 12)$, i.e., it is $2$. The strategy is to toss again only if the initial number of heads is less than $2$.
Let $N_1$ (resp., $N_2$) be a random variable that models the number of heads on the first (resp., second) round. $N_1$ and $N_2$ are independent, and each has distribution $B(4,\frac 12)$.
The total payoff is $N_1 1_{N_1\geq 2} + N_21_{N_1\leq 1}$, and the expected payoff is $$E[N_1 1_{N_1\geq 2}] + E[N_2] P(N_1\leq 1) = \frac {19}8 = 2.375.$$
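The arithmetic can be verified exactly with rational arithmetic (the variable names here are ours):

```python
from fractions import Fraction
from math import comb

# N1, N2 ~ Binomial(4, 1/2): P(N = k) = C(4, k) / 16.
pmf = {k: Fraction(comb(4, k), 16) for k in range(5)}

keep = sum(k * pmf[k] for k in range(2, 5))       # E[N1 * 1{N1 >= 2}] = 7/4
retoss = Fraction(2) * (pmf[0] + pmf[1])          # E[N2] * P(N1 <= 1) = 5/8
assert keep + retoss == Fraction(19, 8)           # total expected payoff 2.375
```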