Here's how White (my new name for Player 1) wins in the $u=v=1$ case. The idea is of course for White to force the knight to an edge, where it can then be summarily captured. WLOG let's force the knight to the lower edge (in my coordinate system, that'll be the edge given by $n=1$). It's enough to show that whenever the knight stands at a point $(a,b)$, we can force a situation where its next move will either allow it to be captured, or will place it on a square whose $n$ coordinate is $<b$; that $n$ coordinate can't decrease forever, so White wins.
So suppose the knight starts at $(a,b)$, and the queen starts at $(c,d)$. Here's a painfully explicit strategy which works uniformly.
In case $c\ne a\pm 1$, White plays the queen to $(c,b+2)$ (the fact that $c\ne a\pm 1$ guarantees Black can't capture here). Now four of Black's moves head towards the bottom, so those are no problem. The moves to $(a-1,b+2)$ and $(a+1,b+2)$ are open to capture, so those are out. Thus Black must move to $(a\pm 2,b+1)$. But now White plays to $(a\pm 2,b+2)$, and Black's only safe moves are to squares with an $n$ coordinate of $b-1$, and the knight has been pushed down, establishing all that we need.
In case $c=a\pm 1$, White moves to $(c,b+4)$. Black's only safe non-retreat is then to $(a\pm 2,b+1)$. White plays to $(a\pm 2,b+4)$. Black has four safe moves; two place it on the $b-1$ row, and we are done. The other two place it back on the $b$ row, but such that we are now back in the previous case, so done.
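The tail of the first case can be checked mechanically. Here is a small Python sketch (the helper names are mine): with the knight on $(a\pm 2,b+1)$ and the queen just played to the square directly above it, enumerating the knight's moves confirms that every safe square sits on row $b-1$.

```python
def knight_moves(x, y):
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(x + dx, y + dy) for dx, dy in deltas]

def queen_attacks(qx, qy, x, y):
    # same row, same file, or same diagonal; no blockers matter in this position
    return (x, y) != (qx, qy) and (x == qx or y == qy or abs(x - qx) == abs(y - qy))

a, b = 0, 0            # everything is translation-invariant, so fix the origin
kx, ky = a + 2, b + 1  # knight after its forced move
qx, qy = a + 2, b + 2  # queen's reply, directly above the knight
safe = [(x, y) for x, y in knight_moves(kx, ky)
        if not queen_attacks(qx, qy, x, y)]
assert safe and all(y == b - 1 for _, y in safe)  # pushed down, as claimed
```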
I'm actually not sure how to be fully explicit in the $u=1,v=2$ case, so I'll just explain it conceptually. Roughly, if the two knights are far enough apart, it's like they aren't even part of the same game, leaving us with successive instances of the winning $u=v=1$ case. (I know very little about combinatorial game theory, but this idea of breaking up games into component sub-games is common. You can see the idea, for example, in this paper by Noam Elkies on pawn endgames, and I feel like this is a big part of thinking in Go, where various regions are their own little battles.) The second knight is irrelevant: the queen picks off one, then the other. On the other hand, if the knights are close enough, the queen will be able to attack both at once, and moreover can arrange that with either side to move, leading to a capture of one knight and reduction to the $u=v=1$ case.
Finally, if $u=1,v=3$, here's an initial position of the knights from which White can't win. Place them at $(a,b)$, $(a+2,b+1)$ and $(a+4,b+2)$. Each knight is defended here (the middle knight defends, and is defended by, both outer ones), and no matter where White places the queen, one of the "outer" knights has a safe square to move to, and can then just move back on the next move.
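A quick sanity check of the defensive structure (Python, helper name mine): each knight in the fortress is defended by another, though the two outer knights do not defend each other directly.

```python
def is_knight_move(p, q):
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    return {dx, dy} == {1, 2}

# the fortress, translated so that (a, b) = (0, 0)
knights = [(0, 0), (2, 1), (4, 2)]
for k in knights:
    # every knight has at least one defender among the others
    assert any(is_knight_move(k, m) for m in knights if m != k)
# the outer knights rely on the middle one; they don't defend each other
assert not is_knight_move((0, 0), (4, 2))
```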
Generally, if $v=u+1$, White will usually be able to at least force repeated even trades of a single queen for a single knight, reducing to the winning $u=1,v=2$ case; but with huge numbers of pieces I don't know how to solidly argue this. For arbitrary $u,v$, the space of possibilities is such that I have absolutely no idea what can be said in general.
I computed the nimbers of a few rings, for what it's worth. I don't see any sensible pattern so perhaps the general answer is hopelessly hard. This wouldn't be surprising, because even for very simple games like sprouts starting with $n$ dots no general pattern is known for the corresponding nimbers.
OK so the way it works is that the nimber of a ring $A$ is the smallest ordinal which is not in the set of nimbers of $A/(x)$ for $x$ non-zero and not a unit. The nimber of a ring is zero iff the corresponding game is a second player win -- this is a standard and easy result in combinatorial game theory. If the nimber is non-zero then the position is a first player win and his winning move is to reduce the ring to a ring with nimber zero.
Fields all have nimber zero, because zero is the smallest ordinal not in the empty set. An easy induction on $n$ shows that for $k$ a field and $n\geq1$, the nimber of $k[x]/(x^n)$ is $n-1$; the point is that the ideals of $k[x]/(x^n)$ are precisely the $(x^i)$. In general an Artin local ring of length $n$ will have nimber at most $n-1$ (again trivial induction), but strict inequality may hold. For example if $V$ is a finite-dimensional vector space over $k$ and we construct a ring $k\oplus \epsilon V$ with $\epsilon^2=0$, this has nimber zero if $V$ is even-dimensional and one if $V$ is odd-dimensional; again the proof is a simple induction on the dimension of $V$, using the fact that a non-zero non-unit element of $k\oplus\epsilon V$ is just a non-zero element of $V$, and quotienting out by this brings the dimension down by 1. In particular the ring $k[x,y]/(x^2,xy,y^2)$ has nimber zero, which means that the moment you start dealing with 2-dimensional varieties things are going to get messy. But perhaps this is not surprising -- an Artin local ring is much more complicated than a game of sprouts and even sprouts is a mystery.
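Both inductions are easy to mirror in code. Here is a Python sketch (the function names are mine): the game on $k[x]/(x^n)$ has moves to $k[x]/(x^i)$ for $1\le i<n$, the game on $k\oplus\epsilon V$ only has moves dropping $\dim V$ by one, and in each case the nimber is the mex of the options' nimbers.

```python
from functools import lru_cache

def mex(options):
    """Minimum excludant: least non-negative integer not among the options."""
    n = 0
    while n in options:
        n += 1
    return n

@lru_cache(maxsize=None)
def nimber_truncated(n):
    # k[x]/(x^n): quotients by a non-zero non-unit are k[x]/(x^i), 1 <= i < n
    return mex({nimber_truncated(i) for i in range(1, n)})

@lru_cache(maxsize=None)
def nimber_dual(d):
    # k + eps*V with eps^2 = 0 and dim V = d: any non-zero non-unit is
    # eps*v with v != 0, and quotienting by it drops dim V by one
    return 0 if d == 0 else mex({nimber_dual(d - 1)})
```

This reproduces the two claims above: $k[x]/(x^n)$ has nimber $n-1$, and $k\oplus\epsilon V$ has nimber $\dim V \bmod 2$.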
Rings like $k[[x]]$ and $k[x]$ have nimber $\omega$, the first infinite ordinal, as they have quotients of nimber $n$ for all finite $n$. As has been implicitly noted in the comments, the answer for a general smooth connected affine curve (over the complexes, say) is slightly delicate. If there is a principal prime divisor then the nimber is non-zero and probably $\omega$ again; it's non-zero because P1 can just reduce to a field. But if the genus is high then there may not be a principal prime divisor, by Riemann-Roch, and now the nimber will be zero because any move will reduce the situation to a direct sum of rings of the form $k[x]/(x^n)$, and such a direct sum has positive nimber as it can be reduced to a field, which has nimber zero, in one move. So there's something for curves. For surfaces I'm scared though because the Artin local rings that will arise when the situation becomes 0-dimensional can be much more complicated.
I don't see any discernible pattern really, but then again the moment you leave really trivial games, nimbers often follow no discernible pattern, so it might be hard to say anything interesting about what's going on.
Your question makes assumptions with which I disagree.
I do not think that strength means choosing winning moves more frequently in theoretically won positions. The positions encountered in chess are not uniformly random, and the positions you encounter depend on your previous moves. You might find someone who reliably executes a nontrivial endgame, but who performs poorly in related positions that someone else sets up.
Part of chess is giving an imperfect opponent opportunities to make mistakes. Your measure assumes there is no skill involved in playing theoretically lost positions, but in practice there is.
Although it is popular to call chess mathematical, I think many other games such as backgammon allow much deeper mathematical analysis than chess, in part because positions have equities which are not restricted to $\{0,1/2,1\}$, and there are Monte Carlo methods for estimating the values of positions. Serious backgammon players commonly measure skill in error rates expressed as normalized millipoints per move. In my November 2006 column for GammonVillage, I looked at the correspondence between backgammon error rates and Elo rating differences on one backgammon server, concluding, for example, "100 rating points roughly corresponds to 1.8 millipoints per move."