I'll ignore the granularity of money. With arbitrarily small bets allowed, player B only has to reach the same amount as A to win with probability arbitrarily close to $1$. So, let's assume that player B wins if the totals are equal.
The simple strategy for player A of betting everything once is pretty good, but not optimal. It wins with probability $0.2499$.
A useful lemma is that player B might as well play boldly, at each step betting either everything or just enough to win. See Dubins and Savage, Inequalities for Stochastic Processes: How to Gamble If You Must. The probability of reaching a target $t$ starting from $\alpha t$ is a continuous, increasing function of $\alpha$ which can be expressed as an infinite sum in terms of the probability of winning each bet and the binary digits of $\alpha$. See exercise 29 of Siegrist, "How To Gamble If You Must."
One family of strategies for player A is to aim for a fixed amount, in which case A might as well bet boldly, too. For example, A could aim for $\$196$ by betting $\$96$; if that fails, A tries to double up the remaining $\$4$ repeatedly until reaching $\$128$, then from $\$128$ bets $\$68$, and so on. The target of $\$196$ lets A win with probability $0.249999385$.
Suppose A chooses a target which can be reached with probability $p$. Then B must reach the same target, succeeding with the same probability $p$, so A wins with probability $p(1-p)$, which is maximized at $p=1/2$, where A wins with probability $0.25$. This corresponds to a target of $\$195.67803788 = \$\frac{100_{10}}{0.1000001011010011110..._2}$: first bet $\$95.67803788$; if that fails, try to double up the remaining $\$4.32196212$ five times in a row, then aim for the target again, and so on.
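These figures can be double-checked numerically. Under bold play, the probability of reaching fortune $1$ from fortune $x$ satisfies the standard functional equation $F(x)=w\,F(2x)$ for $x\le\tfrac12$ and $F(x)=w+(1-w)\,F(2x-1)$ for $x\ge\tfrac12$, where $w$ is the probability of winning a single bet. The sketch below is mine, not part of the original discussion; it assumes $w=0.49$, which is consistent with the quoted $0.2499=0.49\times0.51$ for the all-in strategy.

```python
def bold_win_prob(x, w, depth=200):
    """Probability of reaching fortune 1 from fortune x in [0, 1] under
    bold play, where each individual bet is won with probability w.
    Uses F(x) = w*F(2x) for x <= 1/2 and F(x) = w + (1-w)*F(2x-1) for
    x >= 1/2; truncating at `depth` costs at most max(w, 1-w)**depth."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    if depth == 0:
        return 0.0
    if x <= 0.5:
        return w * bold_win_prob(2.0 * x, w, depth - 1)
    return w + (1.0 - w) * bold_win_prob(2.0 * x - 1.0, w, depth - 1)

W = 0.49  # assumed single-bet win probability

# All-in strategy: A tries to reach 200 from 100 in one bet.
p_all_in = bold_win_prob(100 / 200, W)
print(p_all_in * (1 - p_all_in))  # should match the 0.2499 quoted above

# Aiming for 196 with bold play.
p_196 = bold_win_prob(100 / 196, W)
print(p_196 * (1 - p_196))  # should match the 0.249999385 quoted above

# Optimal target: solve bold_win_prob(100/t, W) = 1/2 by bisection
# (the success probability decreases as the target t increases).
lo, hi = 150.0, 250.0
for _ in range(100):
    mid = (lo + hi) / 2
    if bold_win_prob(100 / mid, W) > 0.5:
        lo = mid
    else:
        hi = mid
t_opt = (lo + hi) / 2
print(t_opt)  # should be near the 195.67803788 quoted above
```

The recursion consumes one binary digit of $x$ per level, which is exactly the binary-digit series mentioned in the lemma above.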
This leaves open the possibility that there is a better strategy which involves stopping at more than one positive value.
Best Answer
Let \begin{equation*} N:=\inf\{n\ge2\colon X_{n-1}>X_n\}, \end{equation*} where $X_1,X_2,\dots$ are independent random variables uniformly distributed on $[0,1]$. We want to find \begin{equation*} EX_N=\sum_{n=2}^\infty EX_n\,1(X_1\le\cdots\le X_{n-1}>X_n). \tag{1} \end{equation*}
We have \begin{equation*} \begin{aligned} &EX_n\,1(X_1\le\cdots\le X_{n-1}>X_n) \\ &=EX_n\,1(X_1\le\cdots\le X_{n-1}) \\ &-EX_n\,1(X_1\le\cdots\le X_{n-1}\le X_n), \end{aligned} \tag{2} \end{equation*} \begin{equation*} \begin{aligned} &EX_n\,1(X_1\le\cdots\le X_{n-1}) \\ &=EX_n\,P(X_1\le\cdots\le X_{n-1})=\frac12\,\frac1{(n-1)!}. \end{aligned} \tag{3} \end{equation*}
The calculation of $EX_n\,1(X_1\le\cdots\le X_{n-1}\le X_n)$ is more involved than that of $EX_n\,1(X_1\le\cdots\le X_{n-1})$, because $X_n$ and $1(X_1\le\cdots\le X_{n-1}\le X_n)$ are not independent -- in contrast with $X_n$ and $1(X_1\le\cdots\le X_{n-1})$. The main idea is to express $X_n$ in terms of indicators, so that it combines more easily with the indicator $1(X_1\le\cdots\le X_{n-1}\le X_n)$. Toward that end, note that $X_n=\int_0^{X_n} dx=\int_0^1 dx\,1(X_n>x)$ and \begin{equation*} 1(X_n>x)\,1(X_1\le\cdots\le X_{n-1}\le X_n) =1(X_1\le\cdots\le X_{n-1}\le X_n>x), \end{equation*} so that \begin{equation*} X_n\,1(X_1\le\cdots\le X_{n-1}\le X_n) =\int_0^1 dx\,1(X_1\le\cdots\le X_{n-1}\le X_n>x), \end{equation*} the latter expression being indeed in terms of the indicators $1(X_1\le\cdots\le X_{n-1}\le X_n>x)$. Hence,
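The resulting value $EX_n\,1(X_1\le\cdots\le X_{n-1}\le X_n)=\frac1{n!}-\frac1{(n+1)!}$ is easy to spot-check by Monte Carlo for a small case; the sketch below (my own, not from the answer) estimates the $n=2$ case, where the formula gives $\frac1{2!}-\frac1{3!}=\frac13$.

```python
import random

random.seed(1)
trials = 200_000
acc = 0.0
for _ in range(trials):
    x1, x2 = random.random(), random.random()
    if x1 <= x2:
        acc += x2  # contributes X_2 * 1(X_1 <= X_2)
estimate = acc / trials
print(estimate)  # should be close to 1/2! - 1/3! = 1/3
```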
\begin{equation*} \begin{aligned} &EX_n\,1(X_1\le\cdots\le X_{n-1}\le X_n) \\ &=E\int_0^1 dx\,1(X_1\le\cdots\le X_{n-1}\le X_n>x) \\ &=\int_0^1 dx\,P(X_1\le\cdots\le X_{n-1}\le X_n>x) \\ &=\int_0^1 dx\,[P(X_1\le\cdots\le X_{n-1}\le X_n) \\ &\qquad\qquad-P(X_1\le\cdots\le X_{n-1}\le X_n\le x)] \\ &=P(X_1\le\cdots\le X_{n-1}\le X_n) \\ &-\int_0^1 dx\,P(X_1\le\cdots\le X_{n-1}\le X_n\le x) \\ &=\frac1{n!}-\int_0^1 dx\,x^n\frac1{n!} = \frac1{n!}-\frac1{(n+1)!}. \end{aligned} \tag{4} \end{equation*} So, by (1), (2), (3), (4), \begin{equation*} \begin{aligned} EX_N&=\sum_{n=2}^\infty \Big(\frac12\,\frac1{(n-1)!}-\frac1{n!}+\frac1{(n+1)!}\Big) \\ &=\frac e2-1\approx0.359. \end{aligned} \end{equation*}
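Since the answer is a series of factorial terms, the closed form $\frac e2-1$ is easy to double-check by truncating the sum; here is a short sketch in Python.

```python
import math

# Partial sum of sum_{n>=2} (1/2 * 1/(n-1)! - 1/n! + 1/(n+1)!);
# the tail beyond n = 30 is far below double precision.
partial = sum(
    0.5 / math.factorial(n - 1) - 1 / math.factorial(n) + 1 / math.factorial(n + 1)
    for n in range(2, 31)
)
print(partial)         # ≈ 0.35914
print(math.e / 2 - 1)  # same value
```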
One may also note that \begin{equation*} \begin{aligned} &EN=E\sum_{n=0}^\infty1(N>n)=\sum_{n=0}^\infty P(N>n) \\ &=\sum_{n=0}^\infty P(X_1\le\cdots\le X_n) =\sum_{n=0}^\infty \frac1{n!}=e\approx2.72. \end{aligned} \end{equation*}
Simulation with Mathematica appears to confirm these results.
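For readers without Mathematica, a minimal Monte Carlo sketch in Python (the function name is mine) estimates both $EX_N$ and $EN$:

```python
import random

def sample_stopped_value():
    """Draw X_1, X_2, ... i.i.d. uniform on [0, 1], stop at the first
    n >= 2 with X_{n-1} > X_n, and return the pair (N, X_N)."""
    prev = random.random()
    n = 1
    while True:
        x = random.random()
        n += 1
        if prev > x:
            return n, x
        prev = x

random.seed(0)
trials = 200_000
pairs = [sample_stopped_value() for _ in range(trials)]
mean_n = sum(n for n, _ in pairs) / trials
mean_x = sum(x for _, x in pairs) / trials
print(mean_x)  # should be close to e/2 - 1 ≈ 0.359
print(mean_n)  # should be close to e ≈ 2.718
```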