Once $A$ has finished playing, ending with an amount $a$, the strategy for $B$ is simple and well-known: Use bold play.
That is, aim for a target sum of $a+\epsilon$ and bet whatever is needed to reach this goal exactly, or bet everything, whichever is less. As seen for example here, the probability of $B$ reaching this target is maximized by this strategy and depends only on the initial proportion $\alpha:=\frac{100}{a+\epsilon}\in(0,1)$. (Of course, $B$ wins immediately if $a<100$.) While the function $p(\alpha)$ that returns $B$'s winning probability is fractal and depends on the dyadic expansion of $\alpha$, we can for simplicity (or for a first approximate analysis) assume that $p(\alpha)=\alpha$: if the coin were fair, we would indeed have $p(\alpha)=\alpha$, and the coin is quite close to fair.
Also, we drop the $\epsilon$, as $B$ may choose it arbitrarily small. (This is the same as saying that $B$ wins in case of a tie.)
In view of this, what should $A$ do?
If $A$ does not play at all, $B$ wins with probability $\approx 1$.
If $A$ decides to bet $x$ once and then stop, $B$ wins if either $A$ loses (probability $0.51$), in which case $B$ wins immediately, or $A$ wins (probability $0.49$) and then $B$ wins, as seen above, with probability $p(\frac{100}{100+x})\approx \frac{100}{100+x}$. So if $A$ decides beforehand to play only once, she had better bet everything she has, and thus wins the grand prize with probability $\approx 0.49\cdot(1-p(\frac12))\approx \frac14$.
Assume $A$ wins the first round and now has $200$. What is the best decision at this point?
Betting $x<100$ will result in a winning probability of approximately
$$0.49\cdot(1-\frac{100}{200+x})+0.51\cdot(1- \frac{100}{200-x}) $$
It looks like the best choice is to stop playing (with winning probability $\approx\frac12$ now).
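This can be checked numerically under the same linear approximation $p(\alpha)\approx\alpha$; the function name and the integer grid of bets below are illustrative choices, not part of the original argument.

```python
# Sketch, assuming p(alpha) ~ alpha: after betting x from a bankroll of 200,
# A survives B's bold play with probability 1 - 100/(200 + x) on a win
# (probability 0.49) and 1 - 100/(200 - x) on a loss (probability 0.51).
def win_after_bet(x):
    return 0.49 * (1 - 100 / (200 + x)) + 0.51 * (1 - 100 / (200 - x))

# Every bet 1 <= x <= 99 does strictly worse than stopping (probability 1/2):
best = max(win_after_bet(x) for x in range(1, 100))
print(best < 0.5)  # True
```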
Alternatively, let us assume instead that $A$ employs bold play as well, with a target sum $T>100$. Then the probability of reaching the target is $\approx \frac{100}{T}$, so the total probability of $A$ winning is approximately
$$ \frac{100}T\cdot(1-\frac{100}T)$$
and this is maximized precisely when $T=200$.
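A quick numerical scan confirms this; the integer grid of candidate targets below is an arbitrary illustrative choice.

```python
# Sketch of A's approximate winning probability with bold play toward a
# target T > 100: reach T with probability ~ 100/T, then survive B's bold
# play with probability ~ 1 - 100/T.
def a_wins(T):
    return (100 / T) * (1 - 100 / T)

# Scan integer targets; the maximum lands at T = 200 with value 1/4.
best_T = max(range(101, 1001), key=a_wins)
print(best_T, a_wins(best_T))  # 200 0.25
```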
This confirms what we suspected above:
The optimal strategy for $A$ is to play once and try to double, resulting in a winning probability $\approx \frac14$.
Admittedly, the optimality of this strategy for $A$ has not been shown rigorously; in particular, there may be some gains from exploiting the detailed shape of $B$'s winning probability function. But I am pretty sure this is a not-too-bad approximation.
The analysis in quarague’s answer isn’t correct because it only takes the possibility of doubling once into account, whereas future doubling opportunities in fact increase the expected payoff.
Denote by $x_k$ the expected value of the game for $A$ when $A$ has score $k$ and has the right to challenge and $B$ doesn’t. We can guess that $A$ challenges when the score is $1$, and $B$ accepts, and then check whether this is self-consistent. Under these assumptions, we have
$$
2x_0=x_{-1}+x_1=x_{-1}-2x_{-1}=-x_{-1}
$$
and
$$
2x_{-1}=x_0+x_{-2}=x_0-1\;,
$$
and thus $x_{-1}=-\frac25$, $x_0=\frac15$ and $x_1=\frac45$. The assumptions turn out to be self-consistent, since the expected return for $A$ at score $1$ would only be $\frac12\left(\frac15+1\right)=\frac35\lt\frac45$ without challenging, and the expected return for $B$ upon refusing would be $-1\lt-\frac45$.
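The two linear equations above can be solved exactly in a few lines; the variable names are just labels for $x_{-1}$, $x_0$, $x_1$.

```python
from fractions import Fraction

# Solve 2*x0 = -x_m1 and 2*x_m1 = x0 - 1 exactly.
# Substituting x0 = -x_m1 / 2 into the second equation gives (5/2)*x_m1 = -1.
x_m1 = Fraction(-2, 5)
x0 = -x_m1 / 2    # 1/5
x1 = -2 * x_m1    # 4/5

# Verify both original equations hold:
print(2 * x0 == -x_m1, 2 * x_m1 == x0 - 1)  # True True
```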
It remains to find the optimal strategy and expected return in the initial state of the game, where both players have the right to challenge. Denote these expected returns by $y_k$. Then $y_0=0$ by symmetry, and $y_1$ is $\frac12$ if $A$ doesn’t challenge, $1$ if $A$ challenges and $B$ refuses and $-2x_{-1}=\frac45$ if $A$ challenges and $B$ accepts, so $A$ challenges at score $1$ and $B$ accepts.
To summarize, the initial value of the game is $0$ by symmetry; a player challenges exactly if they have a score of $1$, the other player always accepts, and the value of the game for the player with the right to challenge is $-\frac25$, $\frac15$ and $\frac45$ at a score of $-1$, $0$ and $1$, respectively.
Best Answer
Modeling the process
Define $\mathcal{S}$ as the set of possible values for the Markov chain: $$\mathcal{S} = \{0, 0.5, 1, 1.5, \dots, 9.5, 10\}$$ Note that $S_0=5$ and $S_n \in \mathcal{S}$ for all $n \in \{0, 1, 2, \dots\}$. We have $$S_{n+1} = S_n + A_n \quad \forall n \in \{0, 1, 2, \dots\} $$ where $$ A_n = \left\{ \begin{array}{ll} (1/2)B_n &\mbox{ if $S_n \notin \{0, 10\}$} \\ 0 & \mbox{ otherwise} \end{array} \right.$$ and $\{B_n\}_{n=0}^{\infty}$ is an i.i.d. sequence with $P[B_n=1]=P[B_n=-1]=1/2$. Then $$\boxed{E[A_n|S_n=s] = 0 \quad \forall s \in \mathcal{S}} \quad \text{(Eq. 1)} $$
Mean
So for each $n \in \{0, 1, 2, ...\}$ we have \begin{align} E[S_{n+1}] &\overset{(a)}{=} \sum_{s \in \mathcal{S}}E[S_{n+1}|S_n=s]P[S_n=s] \\ &\overset{(b)}{=} \sum_{s \in \mathcal{S}}E[S_n + A_n|S_n=s]P[S_n=s] \\ &= \sum_{s \in \mathcal{S}}E[s + A_n|S_n=s]P[S_n=s] \\ &= \sum_{s \in \mathcal{S}}(s + E[A_n|S_n=s])P[S_n=s] \\ &\overset{(c)}{=} \sum_{s \in \mathcal{S}}sP[S_n=s] \\ &\overset{(d)}{=} E[S_n] \end{align} where (a) holds by the law of total expectation; (b) holds by the fact $S_{n+1}=S_n+A_n$; (c) holds by Eq. (1); (d) holds by definition of expectation. Since $E[S_0]=5$ we conclude: $$\boxed{E[S_n]=5 \quad \forall n \in \{0, 1, 2, … \}}$$
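As a sanity check, the distribution of $S_n$ can be evolved exactly over the 21 states; this is a small sketch of the chain defined above, with an arbitrary horizon of 100 steps.

```python
# Evolve the exact distribution of S_n over states 0, 0.5, ..., 10,
# starting from S_0 = 5, and verify that the mean stays at 5.
states = [k / 2 for k in range(21)]
dist = {s: (1.0 if s == 5.0 else 0.0) for s in states}

for _ in range(100):
    new = dict.fromkeys(states, 0.0)
    for s, p in dist.items():
        if s in (0.0, 10.0):        # absorbing boundary states
            new[s] += p
        else:                       # fair step of +/- 1/2
            new[s - 0.5] += p / 2
            new[s + 0.5] += p / 2
    dist = new

mean = sum(s * p for s, p in dist.items())
print(abs(mean - 5) < 1e-9)  # True
```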
Limiting variance
We know $E[S_n]=5$ for all $n$ and so $$Var(S_n) = E[(S_n-5)^2] = \sum_{s \in \mathcal{S}}(s-5)^2P[S_n=s] $$ Since the process is equally likely to end up at state $0$ or $10$ we have \begin{align} \lim_{n\rightarrow\infty} P[S_n=0] &= 1/2\\ \lim_{n\rightarrow\infty} P[S_n=10] &= 1/2\\ \lim_{n\rightarrow\infty} P[S_n=s] &= 0 \quad \forall s \notin \{0, 10\} \end{align} so $$ \boxed{\lim_{n\rightarrow\infty} Var(S_n) = (0-5)^2(1/2) + (10-5)^2(1/2) = 25} $$
Details on variance
Squaring the equation $S_{n+1} = S_n + A_n$ gives $$S_{n+1}^2 = (S_n+A_n)^2 = S_n^2 + 2S_nA_n + A_n^2 $$ So $$E[S_{n+1}^2|S_n] = S_n^2 + 2S_nE[A_n|S_n] + E[A_n^2|S_n] = S_n^2 + 0 + (1/4)1_{\{S_n \notin\{0, 10\}\}}$$ where $1_{\{S_n \notin\{0, 10\}\}}$ is the indicator function that equals $1$ if $S_n \notin \{0,10\}$ and $0$ otherwise. So $$E[S_{n+1}^2] = E[S_n^2] + (1/4)P[S_n \notin \{0,10\}]$$ Subtracting 25 from both sides gives $$ Var(S_{n+1}) = Var(S_n) + (1/4)P[S_n \notin \{0,10\}]$$ and $Var(S_0)=0$, so $$ \boxed{Var(S_n) = (1/4)\sum_{i=0}^{n-1} P[S_i \notin \{0,10\}] \quad \forall n\in \{1, 2, 3, ...\} } $$ Since $P[S_i \notin \{0,10\}] = 1$ for $i \in \{0, 1, 2, 3, ..., 9\}$ we have $$\boxed{Var(S_1)=1/4, Var(S_2)=2/4, Var(S_3) = 3/4, ..., Var(S_{10})= 10/4}$$ On the other hand: $$ Var(S_{11}) = 10/4 + (1/4)\underbrace{(1-2(1/2)^{10})}_{P[S_{10}\notin\{0,10\}]}$$ In general, the variance increases as $n\rightarrow\infty$ to approach a limiting value of $25$. It is possible to compute $P[S_i \notin \{0,10\}]$ for all $i$ (for example, by taking powers of a transition probability matrix), but this calculation is more involved.
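The closing remark about powers of the transition matrix can be sketched by evolving the exact state distribution and tracking the variance along the way; the horizon of 2000 steps is an arbitrary choice, large enough for near-certain absorption.

```python
# Track Var(S_n) = E[(S_n - 5)^2] while evolving the exact distribution
# of the chain over states 0, 0.5, ..., 10, starting from S_0 = 5.
states = [k / 2 for k in range(21)]
dist = {s: (1.0 if s == 5.0 else 0.0) for s in states}
variances = []

for _ in range(2001):
    variances.append(sum((s - 5) ** 2 * p for s, p in dist.items()))
    new = dict.fromkeys(states, 0.0)
    for s, p in dist.items():
        if s in (0.0, 10.0):        # absorbing boundary states
            new[s] += p
        else:                       # fair step of +/- 1/2
            new[s - 0.5] += p / 2
            new[s + 0.5] += p / 2
    dist = new

print(abs(variances[10] - 2.5) < 1e-9)   # True: Var(S_10) = 10/4
print(abs(variances[2000] - 25) < 1e-6)  # True: near the limit 25
```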