There is no simplified description of the Nash equilibrium of this game.
You can compute the best strategy starting from positions where both players are about to win and going backwards from there.
Let $p(Y,O,P)$ be the probability that you win if you are in the situation $(Y,O,P)$ and make the best choices. The difficulty is that to compute the strategy and winning probability at some situation $(Y,O,P)$, your choice depends on the probability $p(O,Y,0)$. So you have a (piecewise affine and contracting) decreasing function $F_{(Y,O,P)}$ such that $p(Y,O,P) = F_{(Y,O,P)}(p(O,Y,0))$; in particular, you need to find the fixpoint of the composition $F_{(Y,O,0)} \circ F_{(O,Y,0)}$ in order to find the real $p(Y,O,0)$, and deduce everything else from there.
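Since each $F$ is contracting, the fixpoint of the composition can be found by simple iteration. A minimal sketch in Python, with a made-up affine map standing in for the real $F$'s (their exact form depends on the game's rules, which are not reproduced here):

```python
def solve_fixpoint(F, x0=0.5, tol=1e-12, max_iter=10_000):
    """Iterate x -> F(x); for a contracting F this converges
    to the unique fixpoint."""
    x = x0
    for _ in range(max_iter):
        x_new = F(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# Toy stand-in: a contracting, decreasing affine map (NOT the real F),
# F(x) = 0.8 - 0.5*x, whose fixpoint is 0.8/1.5 = 8/15.
p = solve_fixpoint(lambda x: 0.8 - 0.5 * x)
```

In the actual computation one would plug in the composition $F_{(Y,O,0)} \circ F_{(O,Y,0)}$ built from the backward-induction tables.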
After computing this for a 100-point game and some inspection, there is no function $g(Y,O)$ such that the strategy simplifies to "stop if you have accumulated $g(Y,O)$ points or more". For example, at $Y=61, O=62$, you should stop when you have exactly $20$ or $21$ points, and continue otherwise.
If you let $g(Y,O)$ be the smallest number of points $P$ such that you should stop at $(Y,O,P)$, then $g$ does not look very nice at all. It is not monotone and does strange things, except in the region where you should just keep playing until you lose or win in one move.
Two approaches:
1) Markov chains. Your problem can be modelled by a Markov chain where each vertex is a state. The transition matrix looks like (assuming the center is an absorbing state):
$$
TM=\begin{pmatrix}
0& 0& 0& 0& 0& 0& 0\\
1/3& 0 & 1/3& 0& 0& 0& 1/3\\
1/3& 1/3& 0& 1/3& 0& 0& 0\\
1/3& 0 & 1/3& 0& 1/3& 0& 0\\
1/3& 0 & 0& 1/3& 0& 1/3& 0\\
1/3& 0 & 0& 0& 1/3& 0& 1/3\\
1/3& 1/3 & 0& 0& 0& 1/3& 0\\
\end{pmatrix}
$$
where I'm labeling the first row/column as the center and the other rows/columns as the remaining vertices, clockwise from the top.
Then, to compute the average number of steps to get back to the center from each vertex, you can compute:
$$(Id-TM)^{-1}\begin{pmatrix}
0\\
1\\
1\\
1\\
1\\
1\\
1\\
\end{pmatrix}=\begin{pmatrix}
0\\
3\\
3\\
3\\
3\\
3\\
3\\
\end{pmatrix}$$
So you need $3$ steps to get from any vertex back to the center, plus the step to get out of the center, so finally you need $4$ steps in total.
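The computation above can be checked exactly with rational arithmetic. A sketch using Python's `fractions` and a small Gauss–Jordan solver (written inline here for illustration, not a library routine):

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b exactly by Gauss-Jordan elimination over Fractions."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Transition matrix from the text: row/column 0 is the (absorbing) center,
# rows/columns 1..6 are the outer vertices clockwise from the top.
rows = [
    [0, 0, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 1],
    [1, 1, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 0, 1],
    [1, 1, 0, 0, 0, 1, 0],
]
TM = [[Fraction(x, 3) for x in row] for row in rows]

# Expected steps to reach the center: t = (I - TM)^{-1} (0, 1, ..., 1)^T
I_minus_TM = [[Fraction(1 if i == j else 0) - TM[i][j] for j in range(7)]
              for i in range(7)]
b = [Fraction(0)] + [Fraction(1)] * 6
t = solve(I_minus_TM, b)  # expected: [0, 3, 3, 3, 3, 3, 3]
```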
2) Series. As commented above, and observing the symmetry of the problem, Markov chains look like overkill.
You can compute $$1+\sum_{i=1}^{\infty}iP(i)$$ where $$P(i)=\frac{1}{3}\left(\frac{2}{3}\right)^{i-1},$$ since you have one chance in three of returning at step $i$ after having travelled along the exterior vertices for $i-1$ steps.
Using that $$\sum_{i=1}^{\infty}ik^{i-1}=\left(\sum_{i=0}^{\infty}k^{i}\right)'=\frac{1}{(1-k)^2} $$ you get $$1+\sum_{i=1}^{\infty}iP(i)=4$$ as above.
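A quick truncated numerical check of this series in Python:

```python
# 1 + sum_{i>=1} i * P(i), with P(i) = (1/3) * (2/3)^(i-1),
# truncated at i = 999 (the tail is negligible).
total = 1 + sum(i * (1 / 3) * (2 / 3) ** (i - 1) for i in range(1, 1000))
# total is numerically 4, matching the Markov-chain result
```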
Best Answer
There are four different classes of vertices: the initial vertex, its neighbours, their neighbours, and the opposite vertex. The matrix of transition probabilities (with the classes in that order) is
$$ \frac15\pmatrix{0&1&0&0\\5&2&2&0\\0&2&2&5\\0&0&1&0}\;. $$
This matrix happens to have a reasonably simple eigensystem. The initial state decomposes as
$$ \pmatrix{1\\0\\0\\0}=\frac1{12}\left(\pmatrix{1\\5\\5\\1}+3\pmatrix{1\\\sqrt5\\-\sqrt5\\-1}+3\pmatrix{1\\-\sqrt5\\\sqrt5\\-1}+5\pmatrix{1\\-1\\-1\\1}\right) $$
with eigenvalues $5^0$, $5^{-\frac12}$, $-5^{-\frac12}$ and $5^{-1}$, respectively. Thus, after $6$ steps, the components are multiplied by $5^0$, $5^{-3}$, $5^{-3}$ and $5^{-6}$, respectively, and the resulting distribution is
$$ 5^{-5}\pmatrix{273\\1302\\1302\\248}\;. $$
Thus, the probability of being back at the starting point after $6$ steps is $\frac{273}{3125}=\frac1{12}+\frac{151}{37500}\approx\frac1{12}+0.004$.
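The six-step distribution can be verified exactly by iterating the class-transition matrix on the initial state (the matrix acts on column vectors, as in the decomposition above); a sketch using exact rational arithmetic:

```python
from fractions import Fraction

# The 4x4 class-transition matrix from the answer: classes are
# (initial vertex, its neighbours, their neighbours, opposite vertex),
# and entry [i][j] is the probability of moving from class j to class i.
P = [[Fraction(x, 5) for x in row] for row in [
    [0, 1, 0, 0],
    [5, 2, 2, 0],
    [0, 2, 2, 5],
    [0, 0, 1, 0],
]]

def mat_vec(M, v):
    """Matrix-vector product for a square matrix over Fractions."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Start at the initial vertex and apply 6 steps.
v = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]
for _ in range(6):
    v = mat_vec(P, v)
# v should equal 5^{-5} * (273, 1302, 1302, 248)
```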