Well, if you have all the individual probabilities figured out, your answer would be:
$$
P(winning)=P(A>36\mid B=36)P(B=36)+P(A>35\mid B=35)P(B=35)+\cdots+P(A>4\mid B=4)P(B=4)
$$
Since your events are independent, we have
$$
P(winning)=P(A>36)P(B=36)+P(A>35)P(B=35)+\cdots+P(A>4)P(B=4)
$$
Then $P(A>n)=\sum_{k=n+1}^{36} P(A=k)$ so
$$
P(winning)=0\cdot P(B=36)+P(A=36)P(B=35)+\cdots+[P(A=36)+\cdots+P(A=5)]P(B=4)
$$
Finally,
$$
P(winning)=\sum_{n=4}^{36}\left(P(B=n)\sum_{k=n+1}^{36} P(A=k)\right).
$$
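The sum above is easy to compute once the two PMFs are known. The exact distributions of $A$ and $B$ aren't spelled out in this excerpt, so the sketch below takes arbitrary PMFs as dictionaries; the check at the end uses a single fair six-sided die for each player (a hypothetical stand-in, not the $4$-to-$36$ distribution from the answer), for which $P(A>B)=15/36=5/12$.

```python
from fractions import Fraction

def p_win(pmf_a, pmf_b):
    """P(A > B) for independent A, B, given PMFs as {value: probability} dicts.
    Implements sum_n P(B = n) * sum_{k > n} P(A = k)."""
    return sum(pb * sum(pa for k, pa in pmf_a.items() if k > n)
               for n, pb in pmf_b.items())

# Hypothetical check: two independent fair six-sided dice.
die = {k: Fraction(1, 6) for k in range(1, 7)}
print(p_win(die, die))  # 15/36 = 5/12
```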
Your probabilities are correct: in a given round, player 1 has probability $p = 1/6$ of winning and player 2 has probability $q = 5/36$ of winning. The expectation of each player (assuming you mean expected winnings) is $pd$ for player 1 and $qd$ for player 2, where $d$ is the amount of money in the pot.
If we now let the game continue until someone wins, a third possibility arises: each round, the probability of passing to the next round is either $1-p$ or $1-q$, depending on whose turn it is.
Let $A$ mean "player 1 wins", let $B$ mean "player 2 wins",
and let $\bar{A}$ mean "player 1 loses" and let $\bar{B}$ mean "player 2 loses" (on a round-by-round basis). From your clarification in your comment, it follows that if player 2 starts, the first round's possible outcomes are $B$ or $\bar{B}$, the second round's possible outcomes are $A$ or $\bar{A}$, and so on, alternately. Thus, the possible game sequences are $\{B, \bar{B}A, \bar{B}\bar{A}B, \bar{B}\bar{A}\bar{B}A, \bar{B}\bar{A}\bar{B}\bar{A}B, \dotsc\}$.
The probability of a sequence like $\bar{B}\bar{A}\bar{B}A$ is the product of the probabilities: $$(1-q)(1-p)(1-q)p.$$
Now, player 2 wins iff the game sequence was one of the following: $\{B, \bar{B}\bar{A}B, \bar{B}\bar{A}\bar{B}\bar{A}B, \dotsc\}$. Thus, the probability of player 2 winning is
$$\sum_{k=0}^\infty [(1-q)(1-p)]^kq = \frac{q}{1-(1-p)(1-q)} = \frac{q}{p + q - pq}$$
by the geometric series formula.
Player 1 wins iff the game sequence was one of the following: $\{\bar{B}A, \bar{B}\bar{A}\bar{B}A, \bar{B}\bar{A}\bar{B}\bar{A}\bar{B}A, \dotsc\}$.
Thus, the probability of player 1 winning is
$$(1-q)\sum_{k=0}^\infty[(1-p)(1-q)]^k p = \frac{p(1-q)}{p+q-pq}.$$
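Plugging the per-round probabilities $p = 1/6$ and $q = 5/36$ from above into the two closed forms, exact fraction arithmetic gives the overall winning chances, which must sum to $1$ since someone eventually wins:

```python
from fractions import Fraction

p = Fraction(1, 6)   # player 1's per-round win probability
q = Fraction(5, 36)  # player 2's per-round win probability

denom = p + q - p * q              # = 1 - (1-p)(1-q)
p1_wins = p * (1 - q) / denom      # player 1 (moves second)
p2_wins = q / denom                # player 2 (moves first)

print(p1_wins, p2_wins)  # 31/61 30/61
assert p1_wins + p2_wins == 1      # the game ends with probability 1
```

Interestingly, player 1 has the edge ($31/61$ vs. $30/61$) despite moving second, because the per-round advantage $p > q$ outweighs player 2's head start.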
Best Answer
I cannot prove that $P(win) \approx {1 \over \sqrt{n}}$, but here is a proof that:
$$P(win) \le {2 \over \sqrt{n}},$$
which of course implies $P(win) \to 0$, answering the OP question.
Proof: For convenience, let $G =$ the OP's original game. Consider some large, fixed $n$. Define:
event $A =$ win $G$ on or before $\sqrt{n}$ turns
event $B =$ win $G$ after $\sqrt{n}$ turns
We will separately bound $P(A), P(B)$ both $\le {1 \over \sqrt{n}}$. The main claim then follows because $P(win) = P(A)+P(B)$.
Lemma: $P(A) \le {1\over \sqrt{n}}$: Imagine a modified game $G'$ where the die is always rolled for all $n$ rounds, and the game result is the first winning or losing roll. This modified game $G'$ is clearly equivalent to the OP's truncated version $G$.
Let random variable $X=$ no. of winning rolls among the initial $\sqrt{n}$ rounds of $G'$. To win on or before $\sqrt{n}$ rounds, it is necessary (though not sufficient) that $X\ge 1$, i.e. $P(A) \le P(X\ge 1)$. Meanwhile, for any non-negative integer-valued r.v. $X$, we have:
$$P(X\ge 1) = \sum_{k=1}^\infty P(X=k) \le \sum_{k=1}^\infty kP(X=k) = E[X]$$
In this case, by linearity, $E[X] = {1\over n} \sqrt{n} = {1 \over \sqrt{n}}.$ Combining, we have:
$$P(A) \le P(X\ge 1) \le E[X] = {1 \over \sqrt{n}} \;\;\; \square$$
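As a concrete sanity check of the Markov-inequality step, take $n = 36$ (so $\sqrt{n} = 6$ rounds, with a winning roll having probability $1/n$ per round, as in the $E[X]$ computation above):

```python
from fractions import Fraction

n = 36
rounds = 6                 # sqrt(n)
p_round = Fraction(1, n)   # per-round probability of a winning roll in G'

# P(X >= 1): at least one winning roll among the first sqrt(n) rounds
p_at_least_one = 1 - (1 - p_round) ** rounds
e_x = rounds * p_round     # E[X] = sqrt(n)/n = 1/sqrt(n)

assert p_at_least_one <= e_x   # the lemma's bound P(X >= 1) <= E[X]
print(float(p_at_least_one), float(e_x))  # ~0.1555 <= ~0.1667
```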
Lemma: $P(B) \le {1 \over \sqrt{n}}$: First, define:
event $E =$ the game is still undecided (no winning or losing roll) after the first $\sqrt{n}$ rounds
Note that event $B$ requires event $E$, i.e. $P(B) = P(B \cap E) = P(E) P(B \mid E)$.
Now imagine two different modified games. In $G_1$, the game starts with $1$ to $\sqrt{n}$ as the losing numbers and adds one more losing number every round. By construction, $P(\text{win }G_1) = P(B \mid E)$.
In $G_2$, the game starts with $1$ to $\sqrt{n}$ as the losing numbers and the set of losing numbers doesn't change for the rest of the game. Clearly, $G_2$ is easier to win than $G_1$ (e.g. via a sample-point by sample-point dominance argument).
The modified game $G_2$ is not limited to $n$ rounds, but it does terminate with probability $1$. Therefore, the ratio:
$${P(\text{win }G_2) \over P(\text{lose }G_2)} = {P(\text{win }G_2 \mid G_2 \text{ terminates}) \over P(\text{lose }G_2 \mid G_2 \text{ terminates})} = {\text{no. of winning rolls} \over \text{no. of losing rolls}} = {1 \over \sqrt{n}}$$
The easiest proof of the above is to consider $G_2$ as a $3$-state Markov chain with $2$ absorbing states, for win and loss. Alternately one can consider there to be $1+\sqrt{n}$ terminating states, by symmetry all equally likely, and then color exactly $1$ of them as "winning" and the others as "losing".
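The $3$-state chain is easy to verify directly. Assuming $n = m^2$ for an integer $m$ (so that $\sqrt{n} = m$ is a whole number of losing faces), each round of $G_2$ wins with probability $1/n$, loses with probability $m/n$, and otherwise continues; summing the geometric series gives a win probability of exactly $1/(1+m)$:

```python
from fractions import Fraction

def g2_win_prob(m):
    """Win probability of G_2 when n = m**2, so the fixed losing set has
    m = sqrt(n) numbers. Each round: win w.p. 1/n, lose w.p. m/n, else reroll.
    Geometric series: sum_k (1 - win - lose)^k * win = win / (win + lose)."""
    n = m * m
    win, lose = Fraction(1, n), Fraction(m, n)
    return win / (win + lose)

for m in (2, 6, 10):
    assert g2_win_prob(m) == Fraction(1, 1 + m)
print(g2_win_prob(6))  # 1/7 for n = 36
```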
Combining everything, we have:
$$P(B) = P(E) P(B \mid E) \le P(B \mid E) = P(\text{win }G_1) \le P(\text{win }G_2) = {1 \over 1 + \sqrt{n}} < {1 \over \sqrt{n}} \;\;\; \square$$