The following ingredients are enough to determine the probabilities.
$1.$ If you toss $n$ times, the probability that the highest number is $\le k$ is $\frac{k^n}{6^n}$.
$2.$ If you toss $n$ times, the probability that the highest number is equal to $k$ is $\frac{k^n -(k-1)^n}{6^n}$.
In ($1$) and ($2$), $k$ ranges from $1$ to $6$.
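Both ingredients are easy to sanity-check by exhaustive enumeration. Here is a quick Python sketch (the helper name `max_cdf` is ours, not from the question):

```python
from fractions import Fraction
from itertools import product

def max_cdf(n, k):
    """Exact P(highest of n fair six-sided tosses <= k), by brute force."""
    hits = sum(1 for rolls in product(range(1, 7), repeat=n)
               if max(rolls) <= k)
    return Fraction(hits, 6 ** n)

n = 3
for k in range(1, 7):
    # Ingredient (1): P(max <= k) = k^n / 6^n
    assert max_cdf(n, k) == Fraction(k ** n, 6 ** n)
    # Ingredient (2): P(max = k) = (k^n - (k-1)^n) / 6^n
    assert max_cdf(n, k) - max_cdf(n, k - 1) == Fraction(k ** n - (k - 1) ** n, 6 ** n)
```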
Added: We do the cooking, in case the list of ingredients was not enough. Assume that $m\ge 1$ and $n \ge 1$.
Player A wins if (i) her highest number is a $6$ and B's highest number is $\le 5$; or (ii) A's highest number is $5$ and B's highest is $\le 4$; or (iii) A's highest is $4$ and B's highest is $\le 3$; or (iv) A's highest is $3$ and B's highest is $\le 2$; or (v) A's highest is $2$ and B's highest is $\le 1$, i.e. every one of B's $n$ tosses shows a $1$.
The probability of (i) is
$$\frac{6^m-5^m}{6^m}\cdot \frac{5^n}{6^n}.$$
The probability of (ii) is
$$\frac{5^m-4^m}{6^m}\cdot \frac{4^n}{6^n}.$$
The probability of (iii) is
$$\frac{4^m-3^m}{6^m}\cdot \frac{3^n}{6^n}.$$
The probability of (iv) is
$$\frac{3^m-2^m}{6^m}\cdot \frac{2^n}{6^n}.$$
The probability of (v) is
$$\frac{2^m-1^m}{6^m}\cdot \frac{1^n}{6^n}.$$
Adding these five probabilities gives the probability that A wins.
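For concreteness, the five terms can be summed and cross-checked against brute-force enumeration for small $m$, $n$ (a Python sketch; the function name `p_a_wins` is ours):

```python
from fractions import Fraction
from itertools import product

def p_a_wins(m, n):
    """P(A's highest of m tosses strictly exceeds B's highest of n tosses),
    summing the five terms above (a = A's highest number, from 2 to 6)."""
    return sum(Fraction(a ** m - (a - 1) ** m, 6 ** m) * Fraction((a - 1) ** n, 6 ** n)
               for a in range(2, 7))

# Brute-force cross-check for m = 2, n = 2.
wins = sum(1 for A in product(range(1, 7), repeat=2)
             for B in product(range(1, 7), repeat=2)
             if max(A) > max(B))
assert p_a_wins(2, 2) == Fraction(wins, 6 ** 4)
```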
There are various other ways to compute the probability that A wins. If we look at the probabilities that we are adding, we can see that there are opportunities to simplify the expression for the sum. If we were dealing with $d$-sided dice, then simplification might be worthwhile, but for the case $d=6$ it probably isn't.
I cannot prove that $P(win) \approx {1 \over \sqrt{n}}$, but here is a proof that:
Claim: $P(win) \le {2 \over \sqrt{n}}$
which of course implies $P(win) \to 0$, answering the OP question.
Proof: For convenience, let $G =$ the OP's original game, and treat $\sqrt{n}$ as an integer (otherwise replace it with $\lceil \sqrt{n} \rceil$ throughout). Consider some large, fixed $n$. Define:
- event $A =$ game $G$ is won on or before round $\sqrt{n}$
- event $B =$ game $G$ is won after round $\sqrt{n}$
We will separately bound $P(A)$ and $P(B)$, each by ${1 \over \sqrt{n}}$. The main claim then follows because the two events are disjoint, so $P(win) = P(A)+P(B)$.
Lemma: $P(A) \le {1\over \sqrt{n}}$: Imagine a modified game $G'$ where the die is always rolled for all $n$ rounds, and the game result is the first winning or losing roll. This modified game $G'$ is clearly equivalent to the OP's truncated version $G$.
Let random variable $X=$ the number of winning rolls among the initial $\sqrt{n}$ rounds of $G'$. To win on or before round $\sqrt{n}$, it is necessary (though not sufficient) that $X\ge 1$, i.e. $P(A) \le P(X\ge 1)$. Meanwhile, for any non-negative integer-valued random variable $X$, we have:
$$P(X\ge 1) = \sum_{k=1}^\infty P(X=k) \le \sum_{k=1}^\infty kP(X=k) = E[X]$$
In this case, by linearity, $E[X] = {1\over n} \sqrt{n} = {1 \over \sqrt{n}}.$ Combining, we have:
$$P(A) \le P(X\ge 1) \le E[X] = {1 \over \sqrt{n}} \;\;\; \square$$
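The chain of inequalities is easy to check numerically. The sketch below assumes, as the proof does, that each of the first $\sqrt{n}$ rounds of $G'$ independently contains a winning roll with probability $1/n$, so $X$ is binomial (the choice $n = 10000$ is arbitrary):

```python
import math

# Assume each of the first sqrt(n) rounds wins independently w.p. 1/n,
# so X ~ Binomial(sqrt(n), 1/n).
n = 10_000
rounds = math.isqrt(n)            # sqrt(n) = 100, an integer here
p = 1 / n

p_x_ge_1 = 1 - (1 - p) ** rounds  # P(X >= 1)
e_x = rounds * p                  # E[X] = sqrt(n)/n = 1/sqrt(n)

# P(A) <= P(X >= 1) <= E[X] = 1/sqrt(n)
assert p_x_ge_1 <= e_x <= 1 / math.sqrt(n) + 1e-12
```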
Lemma: $P(B) \le {1 \over \sqrt{n}}$: First, define:
- event $E= $ game $G$ is inconclusive (neither won nor lost) after $\sqrt{n}$ rounds
Note that event $B$ requires event $E$, i.e. $P(B) = P(B \cap E) = P(E) P(B \mid E)$.
Now imagine two different modified games. In $G_1$, the game starts with $1$ to $\sqrt{n}$ as the losing numbers and adds one more losing number every round. By construction, $P(\text{win }G_1) = P(B \mid E)$.
In $G_2$, the game starts with $1$ to $\sqrt{n}$ as the losing numbers and the set of losing numbers doesn't change for the rest of the game. Clearly, $G_2$ is easier to win than $G_1$ (e.g. via a sample-point by sample-point dominance argument).
The modified game $G_2$ is not limited to $n$ rounds, but it does terminate with probability $1$. Therefore, the ratio:
$${P(\text{win }G_2) \over P(\text{lose }G_2)} = {P(\text{win }G_2 \mid G_2 \text{ terminates}) \over P(\text{lose }G_2 \mid G_2 \text{ terminates})} = {\text{no. of winning rolls} \over \text{no. of losing rolls}} = {1 \over \sqrt{n}}$$
The easiest proof of the above is to consider $G_2$ as a $3$-state Markov chain with $2$ absorbing states, one for a win and one for a loss. Alternatively, one can consider there to be $1+\sqrt{n}$ terminating states, by symmetry all equally likely, and then color exactly $1$ of them as "winning" and the others as "losing".
Combining everything, we have:
$$P(B) = P(E) P(B \mid E) \le P(B \mid E) = P(\text{win }G_1) \le P(\text{win }G_2) = {1 \over 1 + \sqrt{n}} < {1 \over \sqrt{n}} \;\;\; \square$$
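The value $P(\text{win }G_2) = {1 \over 1+\sqrt{n}}$ is easy to test empirically. The sketch below simulates $G_2$ under one concrete reading of the setup above: a fair $n$-sided die with a single winning face, $\sqrt{n}$ fixed losing faces, and every other face triggering a re-roll (the particular face assignments are our assumption):

```python
import random

def win_rate_g2(n, trials=100_000, seed=0):
    """Monte-Carlo estimate of P(win G_2). Faces 0..k-1 lose (k = sqrt(n)),
    face n-1 wins, and every other face just rolls again."""
    rng = random.Random(seed)
    k = round(n ** 0.5)
    wins = 0
    for _ in range(trials):
        while True:
            r = rng.randrange(n)
            if r == n - 1:        # the single winning face
                wins += 1
                break
            if r < k:             # one of the sqrt(n) losing faces
                break
    return wins / trials

est = win_rate_g2(100)            # predicted: 1/(1 + sqrt(100)) = 1/11
assert abs(est - 1 / 11) < 0.01
```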
Best Answer
Nice question! You can make the probability arbitrarily close to $1$ with a die where the probability of landing on $i$ decreases rapidly. For example, a $99\%$ chance of landing on $1$, a $.99\%$ chance of landing on $2$, a $.0099\%$ chance of landing on $3$, etc. Then you will have a $99\%$ chance of getting a $1$ on the first throw; on the second throw, you throw away $99\%$ of the rolls because they are $1$s, and of the remaining $1\%$ of rolls you keep, $99\%$ of them will be $2$s, etc. Of course, this means you will waste a LOT of time rolling dice and ignoring the results, so it might not be the most fun game...
Now, if you want to model this formally, we let the result from the die roll be distributed as a variable $X$, letting $X_i$ be the $i$-th roll (not counting the rolls we ignored and re-rolled). Then we have $P(X_i = i) = P(X = i | X\geq i)$; ignoring the too-low rolls is equivalent to conditioning on the event $X \geq i$.
Let $p_i = P(X=i)$. We want a high chance $p$, close to $1$, of having $X_i = i$. So, set $p_1 = p$; then of the remaining probability $1-p$, we want a proportion of $p$ of it on $2$, so we set $p_2 = p(1-p_1)$; and in general $p_i = p(1-(p_1 + p_2 + \cdots + p_{i-1}))$, except for $p_6$, where we don't have any choice but $p_6 = 1 - p_1 - \cdots - p_5$.
Then we have
\begin{align*}P(\text{win}) &= P(X_1 = 1) \cdot P(X_2 = 2) \cdots P(X_6 = 6)\\ &= P(X = 1) \cdot P(X = 2 | X \geq 2) \cdots P(X = 6 | X \geq 6).\end{align*}
So, by design \begin{align*} P(X_i = i) &= P(X = i \mid X \geq i) = \frac{P(X = i)}{P(X \geq i)}\\ &= \frac{p_i}{1 - p_1 - \cdots - p_{i-1}}\\ &= \frac{p(1 - p_1 - \cdots - p_{i-1})}{1 - p_1 - \cdots - p_{i-1}} = p,\end{align*}
and then $P(\text{win}) = p^5$. (Note that $P(X_6 = 6) = P(X = 6 \mid X \geq 6) = 1$, so the last factor contributes nothing.)
So, for $p = .99$ we have a $.99^5 \approx 95$% chance of winning.
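The construction simulates nicely, with one caveat: at $p = .99$ the re-rolling really is astronomically slow (a kept $6$ has probability about $(0.01)^5$ per roll), so the sketch below checks the $P(\text{win}) = p^5$ prediction with the more patient choice $p = 0.6$ (all names are ours):

```python
import bisect
import random
from itertools import accumulate

def skewed_die(p):
    """Face probabilities p_1..p_6 from the construction above:
    p_i = p * (mass not yet used); p_6 takes whatever is left."""
    probs, remaining = [], 1.0
    for _ in range(5):
        probs.append(p * remaining)
        remaining -= probs[-1]
    probs.append(remaining)
    return probs

def play(cum5, rng):
    """One game: the i-th kept roll must equal i; rolls below i are ignored."""
    for target in range(1, 7):
        while True:
            roll = bisect.bisect(cum5, rng.random()) + 1  # sample one face
            if roll >= target:                            # keep this roll
                break
        if roll != target:
            return False
    return True

p = 0.6
cum5 = list(accumulate(skewed_die(p)))[:5]   # cumulative probs of faces 1..5
rng = random.Random(1)
trials = 20_000
wins = sum(play(cum5, rng) for _ in range(trials))
assert abs(wins / trials - p ** 5) < 0.01    # P(win) = p^5 = 0.07776
```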