You can directly compute the optimal strategy and its expected value with a dynamic program (backward induction).

Consider the possible states of the game, which can be completely described by the (number of -1 cards remaining, number of +1 cards remaining), with 16 possibilities.

Arrange them in a square grid as follows (on paper it might be better to draw it as a diamond, with (3,3) on the left and (0,0) on the right):

$$\begin{array}{ccccccc}
\stackrel{(0)}{(3,3)} & \rightarrow & \stackrel{(1)}{(3,2)} & \rightarrow & \stackrel{(2)}{(3,1)} & \rightarrow & \stackrel{(3)}{(3,0)}\\
\downarrow && \downarrow && \downarrow && \downarrow\\
\stackrel{(-1)}{(2,3)} & \rightarrow & \stackrel{(0)}{(2,2)} & \rightarrow & \stackrel{(1)}{(2,1)} & \rightarrow & \stackrel{(2)}{(2,0)}\\
\downarrow && \downarrow && \downarrow && \downarrow\\
\stackrel{(-2)}{(1,3)} & \rightarrow & \stackrel{(-1)}{(1,2)} & \rightarrow & \stackrel{(0)}{(1,1)} & \rightarrow & \stackrel{(1)}{(1,0)}\\
\downarrow && \downarrow && \downarrow && \downarrow\\
\stackrel{(-3)}{(0,3)} & \rightarrow & \stackrel{(-2)}{(0,2)} & \rightarrow & \stackrel{(-1)}{(0,1)} & \rightarrow & \stackrel{(0)}{(0,0)}\\
\end{array}
$$
In each entry, the top number is the score if you stop at that state, and the bottom pair is the (# of $-1$ cards, # of $+1$ cards) remaining in the deck.

The trick is to work backwards from the (0,0) corner and decide at each state whether you want to continue or not. Examples:

- There is no decision at (0,0); it is worth $0$.
- At (0,1) the choice is between stopping with $-1$, or drawing the last card, which gets $0$. Since drawing is better, we now know (0,1) *is also worth* $0$.
- At (1,0) we stop and take $1$ instead of drawing, which would get $0$.
- At (1,1) a real decision comes up. Stopping is worth 0. Drawing gets you 1/2 chance to move to (1,0) [worth 1] and 1/2 chance to move to (0,1) [worth 0]. So drawing is worth 1/2 on average, and it is optimal to do so.

You can continue filling in all the states to find the optimal strategy. Note that unequal card counts matter: at say (1,2), drawing gives you 1/3 chance to move to (0,2) and 2/3 chance to move to (1,1).

The filled in square looks like:
$$\begin{array}{ccccccc}
\stackrel{17/20}{(3,3)} & \rightarrow & \stackrel{6/5}{(3,2)} & \rightarrow & \stackrel{\mathbf{2}}{(3,1)} & & \stackrel{\mathbf{3}}{(3,0)}\\
\downarrow && \downarrow && &&\\
\stackrel{1/2}{(2,3)} & \rightarrow & \stackrel{2/3}{(2,2)} & \rightarrow & \stackrel{\mathbf{1}}{(2,1)} & \rightarrow^{?} & \stackrel{\mathbf{2}}{(2,0)}\\
\downarrow && \downarrow && \downarrow^{?} &&\\
\stackrel{1/4}{(1,3)} & \rightarrow & \stackrel{1/3}{(1,2)} & \rightarrow & \stackrel{1/2}{(1,1)} & \rightarrow & \stackrel{\mathbf{1}}{(1,0)}\\
\downarrow && \downarrow && \downarrow &&\\
\stackrel{0}{(0,3)} & \rightarrow & \stackrel{0}{(0,2)} & \rightarrow & \stackrel{0}{(0,1)} & \rightarrow & \stackrel{\mathbf{0}}{(0,0)}\\
\end{array}$$

States where you stop have their value bolded. At (2,1) it doesn't matter if you draw or stop.

Since you have made the value-maximizing choice at every state, accounting for the effects of all later choices, Strategy 2 is proven optimal, with value exactly 17/20.
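If you want to check the table without doing the arithmetic by hand, the backward induction is a few lines of code. A minimal Python sketch (the state encoding and function name are my own), using exact fractions:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def value(m, p):
    """Optimal expected score when m '-1' cards and p '+1' cards remain.

    Starting from (3, 3), the score banked so far is m - p,
    which is exactly what stopping yields.
    """
    stop = Fraction(m - p)
    if m + p == 0:
        return stop  # no cards left: forced to stop
    draw = Fraction(0)
    if m:
        draw += Fraction(m, m + p) * value(m - 1, p)  # draw a -1 card
    if p:
        draw += Fraction(p, m + p) * value(m, p - 1)  # draw a +1 card
    return max(stop, draw)

print(value(3, 3))  # 17/20
```

Memoization via `lru_cache` makes this the same backward induction as the grid, just computed lazily from the (3,3) corner.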

More generally, suppose that there are $n$ cards total in the deck, of which $k$ are considered good, while the remaining $n-k$ are bad. The goal is the same: the deck is shuffled, and as the cards are flipped from the deck, you have the option to call "stop" at any time. You win if the next card dealt is good. What is the optimal strategy?

Furthermore, let us modify the rules to say that, *when there is only one card left, you must call stop*. A player who does not call stop before the last card automatically loses, so this modification makes no difference to an optimal player.

It turns out that *every* strategy is the optimal strategy.

**Theorem:** Every strategy for this game has a probability of winning of $k/n$.

Therefore, to answer your question, the probability of winning is the number of primes in $\{1,\dots,N\}$, divided by $N$.
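The theorem is also easy to confirm by brute force on small decks: enumerate every arrangement, run an arbitrary strategy, and count wins. A hypothetical sketch (the `stop` callback interface is my own choice), with the forced stop at the last card built in:

```python
from fractions import Fraction
from itertools import permutations

def win_prob(n, k, stop):
    """Exact win probability in an n-card deck with k good cards.

    `stop(good, bad)` sees the composition of the unseen deck and
    returns True to call stop; with one card left, stopping is forced.
    """
    deck = (1,) * k + (0,) * (n - k)        # 1 = good, 0 = bad
    arrangements = set(permutations(deck))  # each equally likely
    wins = 0
    for perm in arrangements:
        for i in range(n):
            rem = perm[i:]
            good = sum(rem)
            if len(rem) == 1 or stop(good, len(rem) - good):
                wins += perm[i]  # win iff the next card is good
                break
    return Fraction(wins, len(arrangements))

# Three very different strategies, all worth k/n = 2/5:
for s in (lambda g, b: True,     # stop immediately
          lambda g, b: False,    # wait until forced
          lambda g, b: g > b):   # stop when good cards dominate
    print(win_prob(5, 2, s))  # 2/5 each time
```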

### Proof 1: Clever insight

Consider this modification of the game. Suppose that, when you call stop, instead of winning when the *next* card dealt is good, you win if the *bottom* card of the deck is good. This is a silly game, because it does not matter when you call "stop." You will always win if and only if the bottom card is good. For the modified game, the probability of winning is $k/n$.

However, this game has exactly the same probability of winning as the original. The point is that, after you call "stop," the next card and the bottom card have the same chance of being good, by the symmetry of the shuffled deck. It follows that the original game has the same probability of winning.

### Proof 2: Induction

We prove, by induction on $n$, that for all $k\in \{0,1,\dots,n\}$, the probability of winning is $k/n$ for all strategies in an $n$-card deck with $k$ good cards.

The base case of $n=1$ is obvious: you are forced to stop immediately, and you win with probability $k/1$. Assuming that the statement is true for decks of $n-1$ cards, where $n\ge 2$, we prove it for $n$ as follows. A strategy that stops immediately wins with probability $k/n$. Otherwise, the first card is drawn, and conditioning on it (using the induction hypothesis for the remaining $n-1$ cards):
$$
\begin{align}
P(\text{win})
&=P(\text{win}\mid \text{first card is good})P(\text{first card good})
\\&\quad +P(\text{win}\mid \text{first card is bad})P(\text{first card bad})
\\&=\frac{k-1}{n-1}\cdot \frac{k}{n}+\frac{k}{n-1}\cdot \frac{n-k}{n}=\frac{k}n.
\end{align}$$
This completes the proof by induction.
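If you do not trust the algebra, the induction-step identity can be checked exactly over many $(n,k)$ pairs; a short sketch (function name mine):

```python
from fractions import Fraction

def induction_step(n, k):
    """LHS of the induction step for an n-card deck with k good cards,
    assuming the claim already holds for decks of n-1 cards."""
    good_first = Fraction(k, n) * Fraction(k - 1, n - 1)
    bad_first = Fraction(n - k, n) * Fraction(k, n - 1)
    return good_first + bad_first

# The identity holds for every k in {0, ..., n}:
assert all(induction_step(n, k) == Fraction(k, n)
           for n in range(2, 40) for k in range(n + 1))
```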

### Proof 3: Martingale

For each $i\in \{0,1,\dots,n-1\}$, let $X_i$ be the proportion of good cards among the unseen cards after $i$ cards have been seen. (The index is written as $i$ here, since $k$ already denotes the number of good cards.) So, $X_0=k/n$, and $X_1$ is equal to either $k/(n-1)$ or $(k-1)/(n-1)$.

Imagine the player is following a particular strategy, and let $T$ be the turn number on which the player calls stop under that strategy. The proportion of good cards at the time of the stop call is $X_T$, so $X_T$ is the probability that the player wins, conditional on $T$. We just need to evaluate $\mathbb E[X_T]$, which is the unconditional probability that the strategy wins.

The sequence $X_0,X_1,\dots,X_{n-1}$ is a martingale with respect to the natural filtration. To prove this, we must show that $\mathbb E[X_{i+1}\mid X_i]=X_i$ for all $i$. Indeed, given $X_i$, there are two possibilities: either you next draw a good card, with probability $X_i$, in which case $X_{i+1}$ decreases to $\frac{(n-i)X_i-1}{n-i-1}$, or you draw a bad card, with probability $1-X_i$, in which case $X_{i+1}$ increases to $\frac{(n-i)X_i}{n-i-1}$. Therefore,
$$
\mathbb E[X_{i+1}\mid X_i]= X_i\cdot \frac{(n-i)X_i-1}{n-i-1}+\left(1-{X_i}\right)\frac{(n-i)X_i}{n-i-1}={X_i}.
$$
Since the sequence is a martingale, and since $T$ is bounded, the optional stopping theorem gives $\mathbb E[X_T]=\mathbb E[X_0]=k/n$. Therefore, every strategy has a probability of success of $k/n$.
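As a sanity check, the one-step conditional expectation can be verified exactly over all reachable states; a small sketch (variable names mine: $r$ unseen cards, $g$ of them good):

```python
from fractions import Fraction

# Check the one-step martingale identity: if X is the proportion of good
# cards among the r unseen cards, then drawing one card gives
# E[X'] = X * (r*X - 1)/(r - 1) + (1 - X) * (r*X)/(r - 1) = X.
for r in range(2, 30):          # r = number of unseen cards
    for g in range(r + 1):      # g = good cards among them
        x = Fraction(g, r)
        after_good = (r * x - 1) / (r - 1)  # proportion after a good draw
        after_bad = (r * x) / (r - 1)       # proportion after a bad draw
        assert x * after_good + (1 - x) * after_bad == x
```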

## Best Answer

Partitioning the deck into slices is almost the right idea, but you need to exclude the kings from the count. This is because only twenty of the twenty-one slices contain a king in your method, so the slices do not all have the same expected size in your counting scheme.

There are $240$ non-kings, so the expected number of non-kings in each slice is $240/21$. This means the expected number of cards dealt is $5\times (240/21)+5 = 435/7 \approx 62.1$, where the $+5$ accounts for the five kings that come after the first five slices.
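(The numbers above imply a deck of $240$ non-kings and $20$ kings, $260$ cards in all.) By symmetry, the expected position of the $j$-th of $K$ kings in an $n$-card deck is $j(n+1)/(K+1)$, which agrees with the count above: $5\cdot 261/21 = 435/7$. A sketch (function name mine) that verifies the formula by exhaustive enumeration on toy decks, then evaluates the expectation here:

```python
from fractions import Fraction
from itertools import combinations

def expected_jth_position(n, K, j):
    """Exact expected position (1-indexed) of the j-th of K special
    cards in a shuffled n-card deck, enumerating their position sets."""
    total = Fraction(0)
    count = 0
    for pos in combinations(range(1, n + 1), K):  # sorted positions
        total += pos[j - 1]
        count += 1
    return total / count

# Toy checks of the symmetry formula j*(n+1)/(K+1):
assert expected_jth_position(9, 2, 1) == Fraction(1 * 10, 3)
assert expected_jth_position(10, 4, 3) == Fraction(3 * 11, 5)

# The deck above: 240 non-kings + 20 kings = 260 cards, stop at 5th king:
print(Fraction(5 * 261, 21), 5 * Fraction(240, 21) + 5)  # both 435/7
```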

By the way, the question asks about "maximizing your winning chances," but in your solution you compute the "expected number of cards dealt." I see no reason for these to be the same concept, and I would expect that you would actually maximize your chances by choosing the mode, not the mean.