You may want to familiarize yourself with the concepts of Nash equilibrium and pure and mixed strategies before reading this answer.
Player $1$ could potentially benefit from sometimes bluffing in a mixed strategy, but it seems unlikely, so let's first try to find a Nash equilibrium with pure strategies.
A pure strategy in this case amounts to what you describe: A function that says for each number Player $1$ might have whether to keep or exchange it. We can safely assume that this takes the form of keeping at or above a certain threshold and exchanging below.
If Player $1$ always keeps, then the dealer's best response is to keep the $3$ or $4$ and exchange the $1$ or $2$. In this case, the dealer's winning probability is $1$ if she was dealt the $4$, $\frac23$ if she was dealt the $3$, and $\frac12$ if she was dealt the $1$ or $2$, for a total of $\frac14\left(1+\frac23+\frac12+\frac12\right)=\frac23$. Here are the dealer's winning probabilities depending on the initial deal (Player $1$ is vertical, the dealer is horizontal):
\begin{array}{c|cccc}
&1&2&3&4\\\hline
1&&1&1&1\\
2&1&&1&1\\
3&\frac12&\frac12&&1\\
4&0&0&0
\end{array}
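This total can be checked by brute-force enumeration. Here is a minimal sketch in Python, assuming (as in the analysis above) that a dealer who exchanges draws uniformly at random from the two undealt cards:

```python
from fractions import Fraction

total = Fraction(0)
# all ordered deals (player's card, dealer's card)
deals = [(p, d) for p in range(1, 5) for d in range(1, 5) if p != d]
for p, d in deals:
    stock = [c for c in range(1, 5) if c not in (p, d)]
    if d >= 3:
        # dealer keeps the 3 or 4
        dealer_win = Fraction(1 if d > p else 0)
    else:
        # dealer exchanges the 1 or 2: draws uniformly from the two undealt cards
        dealer_win = Fraction(sum(c > p for c in stock), len(stock))
    total += dealer_win
print(total / len(deals))  # 2/3
```

Averaging the dealer's winning probability over the $12$ equally likely deals reproduces the $\frac23$ computed above.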
If Player $1$ only exchanges the $1$, then the dealer's best response is again to keep $3$ or $4$ and exchange $1$ or $2$. (If the dealer has the $3$, Player $1$ has either the $2$ or the $4$; the $4$ can't be beaten and the $2$ is already beaten, so the dealer should keep.) In this case, if Player $1$ is dealt the $1$, the dealer's winning probability is $\frac12$; otherwise, if the dealer is dealt the $1$ it is again $\frac12$; if she is dealt the $2$ it is $\frac14$; if she is dealt the $3$ it is $\frac12$ and if she is dealt the $4$ it is $1$, for a total of $\frac14\cdot\frac12+\frac14\cdot\frac12+\frac16\left(\frac14+\frac12+1\right)=\frac{13}{24}$. Here again are the dealer's winning probabilities depending on the initial deal:
\begin{array}{c|cccc}
&1&2&3&4\\\hline
1&&1&\frac12&0\\
2&1&&1&1\\
3&\frac12&\frac12&&1\\
4&0&0&0
\end{array}
If Player $1$ exchanges the $1$ and $2$, the dealer's best response is again to keep $3$ or $4$ and exchange $1$ or $2$. In this case the dealer's winning probabilities depending on the initial deal are as follows:
\begin{array}{c|cccc}
&1&2&3&4\\\hline
1&&1&\frac12&0\\
2&1&&\frac12&0\\
3&\frac12&\frac12&&1\\
4&0&0&0
\end{array}
Now the dealer's overall winning probability is only $\frac5{12}$.
If Player $1$ exchanges the $1$, $2$ and $3$, the dealer's best response is again to keep $3$ or $4$ and exchange $1$ or $2$. In this case the dealer's winning probabilities depending on the initial deal are as follows:
\begin{array}{c|cccc}
&1&2&3&4\\\hline
1&&1&\frac12&0\\
2&1&&\frac12&0\\
3&1&1&&0\\
4&0&0&0
\end{array}
The dealer's overall winning probability is again $\frac5{12}$.
Player $1$ certainly cannot gain from exchanging the $4$, so that exhausts the possibilities.
Since the dealer's best response is independent of Player $1$'s strategy, Player $1$ can't gain by using a mixed strategy. Player $1$ should exchange the $1$ and $2$ and keep the $4$, and can decide with arbitrary probability whether to exchange the $3$; the dealer's best response is to exchange the $1$ and $2$ and to keep the $3$ and $4$. Player $1$ wins with probability $\frac7{12}$ and the dealer wins with probability $\frac5{12}$.
Incidentally, the strategies that one might intuitively choose without any analysis, namely that either player exchanges the two lower cards and keeps the two higher cards, form an equilibrium solution.
P.S.: Here's a more direct proof that the dealer should exchange $1$ and $2$ and keep $3$ and $4$: If she has the $1$, there's nothing to lose. If she has the $2$, there's also nothing to lose, since the only way she could be winning is against the $1$, and she'll still win against the $1$ after exchanging. Likewise, if she has the $4$, there's nothing to gain, and if she has the $3$, there's also nothing to gain, since the only way she could be losing is against the $4$, and she'll still lose against the $4$ after exchanging.
P.P.S.: To show that Player $1$ should keep $x+1$ if she keeps $x$, it suffices to show that keeping $x+1$ is at least as good as keeping $x$ and exchanging $x$ is at least as good as exchanging $x+1$.
The second part is straightforward: if we exchange, we get the same card no matter which card we had, and surely giving the dealer $x$ is at least as good as giving her $x+1$.
For the first part, we can compare deals pairwise: If the dealer has anything other than $x$ or $x+1$, keeping $x+1$ is at least as good as keeping $x$. If the dealer has $x$ or $x+1$, then keeping $x+1$ (while the dealer has $x$) is at least as good as if the dealer draws (since Player $1$ wins if the dealer doesn't draw), and keeping $x$ (while the dealer has $x+1$) is at most as good as if the dealer draws (since Player $1$ loses if the dealer doesn't draw), with the same chances for the draw. Since keeping $x+1$ is at least as good as keeping $x$ for each pair of deals, it's also at least as good on average.
This argument uses neither the number of cards nor the fact that for $4$ cards the dealer's strategy is independent of Player $1$'s strategy, so it holds for any number of cards (as long as we're comparing pure strategies).
Given a payoff matrix of size $n$, the gain of a mixed strategy $(p_1, p_2, \dots, p_n)$ can be expressed as a quadratic polynomial in $(p_1, p_2, \dots, p_{n-1})$ on the polytope defined by $0\le p_i\le 1$ and $\sum_{1\le i<n}p_i\le 1$.
Finding the optimal strategies amounts to finding all local maxima of this function: in the general case, you need to evaluate the gradient and the eigenvalues of the Hessian matrix (you can read more at https://en.wikipedia.org/wiki/Hessian_matrix#Critical_points).
If you find a maximum in the interior of the polytope, then you have your mixed strategy (there can be only one, since the function is quadratic).
If that fails (no maximum in the interior of the polytope), then you need to find the maxima on the boundary (each part of the boundary is defined by $p_i=0$ for some $1\le i\le n$) by applying the same method with one variable fewer. Each part of the boundary can have its own set of local maxima.
There is always the possibility of a degenerate solution (with some eigenvalues equal to $0$), which can mean that the set of optimal strategies is not a set of isolated points but something larger, such as a hyperplane $p_1=p_2$.
In your example:
$$g(p_1,p_2,p_3)=2p_1p_2+4p_1p_3+6p_2p_3.$$
Substituting $p_3=1-p_1-p_2$ gives
$$g(p_1,p_2)=-4p_1^2-8p_1p_2-6p_2^2+4p_1+6p_2.$$
The gradient (the vector of first derivatives) is $(-8p_1-8p_2+4,\,-8p_1-12p_2+6)$, which is zero iff $(p_1,p_2)=(0,\frac{1}{2})$.
The Hessian matrix is constant (the second derivatives of a quadratic function), and here it is
$$\left[\begin{matrix}
-8 & -8 \\ -8 & -12\\
\end{matrix}\right]$$
The eigenvalues are both negative (since the determinant is positive ($32$) and the trace is negative ($-20$)), so it is a global maximum, attained at $(p_1,p_2,p_3)=(0,\frac{1}{2},\frac{1}{2})$, which lies in the polytope. There is no need to look at the boundary (which can be quite complex).
So there is only one stable and optimal strategy for this game.
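The computation above can be checked mechanically. Here is a sketch in Python using exact fractions; since the system is only $2\times 2$, Cramer's rule stands in for the general eigenvalue machinery:

```python
from fractions import Fraction as F

# g(p1, p2) = -4 p1^2 - 8 p1 p2 - 6 p2^2 + 4 p1 + 6 p2  (after p3 = 1 - p1 - p2)
def g(p1, p2):
    return -4*p1*p1 - 8*p1*p2 - 6*p2*p2 + 4*p1 + 6*p2

# Constant Hessian H and linear term b, so that grad g(p) = H p + b
H = [[F(-8), F(-8)], [F(-8), F(-12)]]
b = [F(4), F(6)]

# Critical point: solve H p = -b by Cramer's rule
det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
p1 = (-b[0]*H[1][1] + b[1]*H[0][1]) / det
p2 = (-H[0][0]*b[1] + H[1][0]*b[0]) / det
print(p1, p2)                   # 0 1/2
print(det, H[0][0] + H[1][1])   # 32 -20: det > 0, trace < 0 -> both eigenvalues negative
print(g(p1, p2))                # 3/2, the value of the gain at the optimum
```

This confirms the unique maximum at $(p_1,p_2)=(0,\frac12)$, i.e. the mixed strategy $(0,\frac12,\frac12)$.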
The logic of equilibrium here is: you should choose your strategy (i.e. determine your probabilities of picking $5$ and $10$) in such a way that, however your opponent chooses his strategy, your expected gain is not influenced.
Let's say you pick $5$ with probability $p_1$ and your opponent picks $5$ with probability $p_2$. Then, once you fixed your choice of $p_1$, the expectation $\mathbb{E}$ is a linear function in $p_2$: $$\mathbb{E} = 10p_1 + (5 - 15p_1)p_2.$$ In this situation, you see that choosing $p_1 = 1/3$ will guarantee an expectation of $10/3$, regardless of $p_2$.
Suppose you choose $p_1 < 1/3$. Then there is the "risk" that your opponent somehow knows your strategy (e.g. via statistics from multiple rounds of the game) and chooses $p_2 = 0$. You then have $\mathbb{E} = 10p_1 < 10/3$.
If, in the other direction, you choose $p_1 > 1/3$, then again assume your opponent gets to know your strategy and chooses $p_2 = 1$. You then have $\mathbb{E} = 5 - 5p_1 < 10/3$.
Therefore, you see that the "safest" way of playing is to choose $p_1 = 1/3$. It is in that sense that we say it's the "optimal strategy".
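This worst-case reasoning can be checked numerically. A small sketch in Python with exact fractions (the names `E` and `worst_case` are just illustrative):

```python
from fractions import Fraction as F

def E(p1, p2):
    # expected gain from the answer above: E = 10 p1 + (5 - 15 p1) p2
    return 10*p1 + (5 - 15*p1)*p2

# at p1 = 1/3 the expectation is 10/3 regardless of p2
assert all(E(F(1, 3), F(k, 10)) == F(10, 3) for k in range(11))

# since E is linear in p2, an informed opponent plays p2 = 0 or p2 = 1
def worst_case(p1):
    return min(E(p1, F(0)), E(p1, F(1)))

print(worst_case(F(1, 3)))   # 10/3
print(worst_case(F(1, 4)))   # 5/2, below 10/3
print(worst_case(F(1, 2)))   # 5/2, below 10/3
```

Any deviation from $p_1=1/3$ lowers the guaranteed expectation, which is exactly the sense in which $p_1=1/3$ is the "safest" choice.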
I hope this whole logic makes sense to you.
And as an exercise, you may try to solve the general situation: say the two cards are labelled A and B, and your gain has $4$ possibilities: $a, b, c, d$, which correspond to the $4$ combinations AA, AB, BA, BB of your cards. Then what is your optimal strategy?