Believe it or not, it's the ties that make this problem really complicated to analyze. For that reason I'm going to ignore ties for now, and assume that all 52 cards have a unique order.
Suppose that the dealer has picked a card out of $n$, and we know (through card counting) that $l$ of the remaining $n-1$ cards are lower. Then our winning chance, assuming we pick the better of higher/lower, is:
$$p = \frac{\max(l, n-1-l)}{n-1}$$
Suppose we have seen the dealer's card and know our $p$ to win. Now there are two choices:
- See our card. We have an expected outcome of $pmb$, where $m$ is our multiplier and $b$ is our bet. (We ignore the option value of going deeper into the rounds.)
- Do not see our card. We have an expected outcome of $\frac{1}{2}b$.
Now as long as $m > 1$ we should always choose option one, because $p$ can never be below $\frac{1}{2}$: by symmetry, if one choice has a less than 50% chance to win, the other must have more than a 50% chance.
Our average win% in a round with $n$ cards left is:
$$\frac{1}{n}\sum_{l=0}^{n-1}\frac{\max(l, n-1-l)}{n-1} = \frac{1}{n(n-1)}\sum_{l=0}^{n-1}\max(l, n-1-l) = \frac{6n^2-4n-1+(-1)^n}{8n(n-1)}$$
Which for even $n$ simplifies to $\dfrac{3n - 2}{4n - 4}$.
Which gives the following winning percentages for the rounds:
| Round $r$ | Cards left $n$ | Win chance $p$ |
|---:|---:|---|
| 1 | 52 | 0.7549019607843137 |
| 2 | 50 | 0.7551020408163265 |
| 3 | 48 | 0.7553191489361702 |
| 4 | 46 | 0.7555555555555555 |
| 5 | 44 | 0.7558139534883721 |
| 6 | 42 | 0.7560975609756098 |
| 7 | 40 | 0.7564102564102564 |
| 8 | 38 | 0.7567567567567568 |
| 9 | 36 | 0.7571428571428571 |
| 10 | 34 | 0.7575757575757576 |
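The table can be reproduced, and the closed form checked, with a short script. This is a sketch: `round_win_chance` is an illustrative name, and it assumes (as in the table) that two cards leave the deck each round.

```python
from fractions import Fraction

def round_win_chance(n):
    # Average of max(l, n-1-l)/(n-1) over the n possible dealer cards.
    return Fraction(sum(max(l, n - 1 - l) for l in range(n)), n * (n - 1))

for r in range(1, 11):
    n = 52 - 2 * (r - 1)  # two cards leave the deck each round
    p = round_win_chance(n)
    closed = Fraction(6 * n * n - 4 * n - 1 + (-1) ** n, 8 * n * (n - 1))
    assert p == closed    # matches the closed form above
    print(f"{r:02d} {n} {float(p)}")
```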
We can see that even going into the deep rounds has a negligible effect on our chance to win. Also note that ties only improve on this: a tie is as if the round never happened, while slightly improving our chances.
Now, when is it worth playing this game? If you only play a single round and then cash out, it's worth it when
$$m > \frac{4\times 52 - 4}{3\times 52 - 2} = 1.324675$$
The optimal play is to always play the full $9$ or $10$ rounds and then cash out.
For $9$ rounds your chance of winning them all is $\approx 0.08057$, which makes it worth it if:
$$m^9 > \frac{1}{0.08057}$$
$$m > 1.322927$$
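As a sanity check on these two numbers, the chance of surviving all $9$ rounds and the break-even multiplier can be computed directly (reusing the per-round formula; `round_win_chance` is an illustrative name):

```python
from fractions import Fraction

def round_win_chance(n):
    # Average of max(l, n-1-l)/(n-1) over the n possible dealer cards.
    return Fraction(sum(max(l, n - 1 - l) for l in range(n)), n * (n - 1))

# Chance of winning 9 consecutive rounds with 52, 50, ..., 36 cards left.
p_all = 1.0
for r in range(1, 10):
    p_all *= float(round_win_chance(52 - 2 * (r - 1)))

m = (1 / p_all) ** (1 / 9)  # break-even multiplier for a 9-round run
print(p_all, m)
```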
TL;DR: advanced strategizing isn't really applicable here. Wait for a day when the multiplier is $\geq 1.35$ to make a noticeable profit. In fact, assuming you can play the game as fast as you want, don't even bother card counting: just repeatedly play only round 1, as the advantage gained from card counting is minimal and a faster throughput of games makes more money.
You can directly compute the optimal strategy and its expected value with a dynamic program (backward induction).
Consider the possible states of the game, each completely described by the pair (number of $-1$ cards remaining, number of $+1$ cards remaining), giving 16 possibilities.
Arrange them in a square grid as follows (it might be better on paper if you make it a diamond, with (3,3) on the left and (0,0) on the right).
$$\begin{array}{ccccccc}
\stackrel{(0)}{(3,3)} & \rightarrow & \stackrel{(1)}{(3,2)} & \rightarrow & \stackrel{(2)}{(3,1)} & \rightarrow & \stackrel{(3)}{(3,0)}\\
\downarrow && \downarrow && \downarrow && \downarrow\\
\stackrel{(-1)}{(2,3)} & \rightarrow & \stackrel{(0)}{(2,2)} & \rightarrow & \stackrel{(1)}{(2,1)} & \rightarrow & \stackrel{(2)}{(2,0)}\\
\downarrow && \downarrow && \downarrow && \downarrow\\
\stackrel{(-2)}{(1,3)} & \rightarrow & \stackrel{(-1)}{(1,2)} & \rightarrow & \stackrel{(0)}{(1,1)} & \rightarrow & \stackrel{(1)}{(1,0)}\\
\downarrow && \downarrow && \downarrow && \downarrow\\
\stackrel{(-3)}{(0,3)} & \rightarrow & \stackrel{(-2)}{(0,2)} & \rightarrow & \stackrel{(-1)}{(0,1)} & \rightarrow & \stackrel{(0)}{(0,0)}\\
\end{array}
$$
Each entry shows on top the score if you stop there, and on the bottom the number of $(-1, +1)$ cards remaining in the deck.
The trick is to work backwards from the (0,0) corner and decide at each state whether you want to continue or not. Examples:
- There is no decision at (0,0), it is worth 0.
- At (0,1) the choice is between stopping for $-1$, or drawing the card, which moves to (0,0) and gets 0. Since drawing is better, we now know (0,1) is also worth 0.
- At (1,0) we stop and take $1$ instead of drawing, which gets 0.
- At (1,1) a real decision comes up. Stopping is worth 0. Drawing gets you 1/2 chance to move to (1,0) [worth 1] and 1/2 chance to move to (0,1) [worth 0]. So drawing is worth 1/2 on average, and it is optimal to do so.
You can continue filling in all the states to find the optimal strategy. Note that unequal card counts matter: at, say, (1,2), drawing gives you a 1/3 chance to move to (0,2) and a 2/3 chance to move to (1,1).
The filled in square looks like:
$$\begin{array}{ccccccc}
\stackrel{17/20}{(3,3)} & \rightarrow & \stackrel{6/5}{(3,2)} & \rightarrow & \stackrel{\mathbf{2}}{(3,1)} & & \stackrel{\mathbf{3}}{(3,0)}\\
\downarrow && \downarrow && &&\\
\stackrel{1/2}{(2,3)} & \rightarrow & \stackrel{2/3}{(2,2)} & \rightarrow & \stackrel{\mathbf{1}}{(2,1)} & \rightarrow^{?} & \stackrel{\mathbf{2}}{(2,0)}\\
\downarrow && \downarrow && \downarrow^{?} &&\\
\stackrel{1/4}{(1,3)} & \rightarrow & \stackrel{1/3}{(1,2)} & \rightarrow & \stackrel{1/2}{(1,1)} & \rightarrow & \stackrel{\mathbf{1}}{(1,0)}\\
\downarrow && \downarrow && \downarrow &&\\
\stackrel{0}{(0,3)} & \rightarrow & \stackrel{0}{(0,2)} & \rightarrow & \stackrel{0}{(0,1)} & \rightarrow & \stackrel{\mathbf{0}}{(0,0)}\\
\end{array}$$
States where you stop have their value bolded. At (2,1) it doesn't matter if you draw or stop.
Since you have made value-maximizing choices at every step, including the effects of later choices, Strategy 2 is proven optimal, with value exactly $17/20$.
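The backward induction above can be written as a short memoized recursion over states $(a, b)$. This is a sketch for the 3-plus-3 deck described above (`value` is an illustrative name); it reproduces the $17/20$ in the corner.

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def value(a, b):
    """Optimal value of the state with a '-1' cards and b '+1' cards left."""
    # Score if you stop: (+1s drawn) - (-1s drawn) = (3-b) - (3-a) = a - b.
    stop = Fraction(a - b)
    if a + b == 0:
        return stop  # no decision at (0,0)
    # Drawing moves to (a-1,b) or (a,b-1) in proportion to the card counts.
    draw = (Fraction(a, a + b) * value(a - 1, b)
            + Fraction(b, a + b) * value(a, b - 1))
    return max(stop, draw)

print(value(3, 3))  # 17/20
```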
Best Answer
For the original game, let's suppose we have some amount $x\geq 1$, there is at least one card left, and we are considering whether to draw one more card or not.
Therefore the strategy is to always take the first $10$ cards, and take the last one unless it's known to be the joker. This has expectation $93.\overline{54}$, slightly better than your suggested strategy.
In the amended game, you can describe any plausible strategy as follows: in round $i$, if you have not yet seen the joker and currently have $x$, you bet $p_ix$, where $p_1,\ldots,p_{11}$ are fixed values; if you have seen the joker, you bet $x$. (Clearly the right strategy scales, so $p_i$ will be independent of what you currently have.)
Now suppose $0<p_i<1$. Then it is not hard to show that betting $p_ix$ has the same expected outcome as betting $x$ with probability $p_i$, and $0$ otherwise. And so it is clearly better to choose whichever of betting $x$ or betting $0$ gives the greater expectation (unless they are equal, in which case it doesn't matter what you do). Thus there is an optimal strategy where $p_i\in\{0,1\}$ for each $i$.
Clearly $p_{11}=0$. Suppose $p_j=0$ for all $j>i$, and you are at round $i$ with $x$, not having seen the joker.
If you bet $0$, your possible payoffs are $2^0x,2^1x,\ldots,2^{11-i}x$, depending on the position of the joker, all equally likely. (If it's next, you get the maximum.)
If you bet $x$, you get $2^{11-i}x/2048=2^{-i}x$ if the joker comes next; otherwise you double your winnings. But your winnings in the other cases were $2^0x,\ldots,2^{10-i}x$, so they become $2^1x,\ldots,2^{11-i}x$.
Thus your possible outcomes in the two cases are almost exactly the same (although they come in a different order). The only real difference is that $2^0x$ (for $p_i=0$) has been replaced by $2^{-i}x$ (for $p_i=1$). Therefore, by backwards induction, the optimal choice is $p_i=0$ for all $i$.