The abstract general recipe looks like this. Given a domain $D$, two disjoint sets $A,B \subset D$ and a Markov process with generator $L$, the probability of hitting $A$ before hitting $B$, starting from $x$, solves the boundary value problem
$$\begin{aligned}
(Lu)(x) = 0, & \quad x \notin A \cup B, \\
u(x) = 1, & \quad x \in A, \\
u(x) = 0, & \quad x \in B.
\end{aligned}$$
As stated this is not quite the problem you want to solve, because your problem has no $B$. To get around this, you can solve a sequence of problems in which you "move $B$ to infinity". For the random walk on $\mathbb{Z}$ that steps right with probability $p$ and left with probability $1-p$, where you are interested in hitting zero, take $A=\{ 0 \}$, $L=P-I$ where $P$ is the transition probability matrix, and $B=\{ n \}$, where $n$ is a large positive integer. (Since the walk is not symmetric, the sign of the starting point matters: the drift points toward $0$ on one side of the origin and away from it on the other.) The result is a linear system of equations, which for a positive starting point looks like
$$\begin{aligned}
u_0 &= 1, \\
(1-p)u_0 - u_1 + pu_2 &= 0, \\
(1-p)u_1 - u_2 + pu_3 &= 0, \\
&\;\;\vdots \\
(1-p)u_{n-2} - u_{n-1} + pu_n &= 0, \\
u_n &= 0.
\end{aligned}$$
This can be solved by recurrence relation techniques: for $p \neq 1/2$ the general solution is $u_k = a + b\,r^k$ with $r = (1-p)/p$, and imposing the boundary conditions gives $u_k = (r^k - r^n)/(1 - r^n)$. Sending $n \to \infty$, you find that for $p > 1/2$ (drift away from $0$) $u_k$ converges to $r^k < 1$ for every $k > 0$, while for $p < 1/2$ (drift toward $0$) $u_k$ converges to $1$. Likewise, if you start on the negative side with the process biased towards moving to the right, the hitting probability converges to $1$.
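This is easy to check numerically. The sketch below (not part of the original argument; $p = 0.6$ and $n = 60$ are arbitrary illustrative choices) solves the truncated system by a shooting method, using the fact that the solution depends affinely on the unknown $u_1$, and compares it with the closed form $u_k = (r^k - r^n)/(1 - r^n)$, $r = (1-p)/p$:

```python
# Sketch: solve u_0 = 1, u_n = 0, (1-p) u_{k-1} - u_k + p u_{k+1} = 0
# by shooting on the unknown u_1; p and n are illustrative choices.

def hit_zero_before_n(p, n):
    """Probabilities u_0..u_n of hitting 0 before n, right-step prob p."""
    def u_n_given(u1):
        # Run the recurrence u_{k+1} = (u_k - (1-p) u_{k-1}) / p forward.
        prev, cur = 1.0, u1
        for _ in range(n - 1):
            prev, cur = cur, (cur - (1 - p) * prev) / p
        return cur
    # u_n depends affinely on u_1, so pick u_1 to make u_n = 0.
    a, b = u_n_given(0.0), u_n_given(1.0)
    u1 = -a / (b - a)
    us = [1.0, u1]
    for _ in range(n - 1):
        us.append((us[-1] - (1 - p) * us[-2]) / p)
    return us

p, n = 0.6, 60
r = (1 - p) / p
us = hit_zero_before_n(p, n)
closed = [(r**k - r**n) / (1 - r**n) for k in range(n + 1)]
assert all(abs(u - c) < 1e-9 for u, c in zip(us, closed))
# For this n, u_k is already very close to the limiting value r**k:
assert abs(us[3] - r**3) < 1e-6
```

The shooting approach avoids setting up the full tridiagonal matrix: two forward runs of the recurrence determine $u_n$ as an affine function of $u_1$.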
Note that a closely related recipe works for finding the expected time $m(x)$ to hit $0$ starting from a point $x$: solve $(Lm)(x) = -1$ for $x \notin A$ with $m(x) = 0$ on $A$.
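As a sketch of that recipe (my own illustration, with an arbitrary bias $p = 3/10 < 1/2$): the expected hitting time from $k$ is $m_k = k/(1-2p)$, and one can check exactly that this satisfies $(Lm)(k) = -1$, and approximately that it matches a Monte Carlo estimate:

```python
import random
from fractions import Fraction

# Illustrative bias: P(step right) = p = 3/10, drift toward 0.
p = Fraction(3, 10)
m = lambda k: k / (1 - 2 * p)  # candidate expected hitting time

# Exact check of (Lm)(k) = -1, i.e. (1-p) m_{k-1} - m_k + p m_{k+1} = -1.
for k in range(1, 20):
    assert (1 - p) * m(k - 1) - m(k) + p * m(k + 1) == -1

# Monte Carlo estimate of E_1[time to hit 0] vs m_1 = 1/(1-2p) = 2.5.
random.seed(0)

def time_to_zero(start, p_float):
    x, t = start, 0
    while x != 0:
        x += 1 if random.random() < p_float else -1
        t += 1
    return t

trials = 20000
est = sum(time_to_zero(1, 0.3) for _ in range(trials)) / trials
assert abs(est - 2.5) < 0.15
```

The exact check uses rational arithmetic so the identity $(Lm)(k) = -1$ holds with no floating-point slack.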
What you're doing wrong is that you're double-counting the walks that return to the origin two or more times. Instead of $\binom{2n}{n}$, you may want to use the Catalan numbers to count the paths that return to the origin for the first time after exactly $2n$ steps.
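To make the Catalan-number suggestion concrete (my own sketch, not part of the answer): for the simple symmetric walk, the number of length-$2n$ paths whose first return to $0$ is at step $2n$ is $2C_{n-1}$, the factor $2$ accounting for the sign of the first step. A brute-force enumeration confirms this for small $n$:

```python
from itertools import product
from math import comb

def catalan(m):
    return comb(2 * m, m) // (m + 1)

def first_return_count(n):
    """Count +/-1 paths of length 2n whose first return to 0 is step 2n."""
    count = 0
    for steps in product((1, -1), repeat=2 * n):
        pos, ok = 0, True
        for i, s in enumerate(steps):
            pos += s
            if pos == 0 and i < 2 * n - 1:
                ok = False  # returned to 0 too early
                break
        if ok and pos == 0:
            count += 1
    return count

for n in range(1, 6):
    assert first_return_count(n) == 2 * catalan(n - 1)
```

In particular $P(T = 2n) = 2C_{n-1}/2^{2n} = C_{n-1}/2^{2n-1}$, which is exactly the sequence of coefficients of $1-\sqrt{1-s^2}$ in the generating-function answer.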
Best Answer
Let $T$ denote the first return time to $0$. A first-step argument (from $0$ the walk moves to $\pm 1$, and by symmetry both sides behave alike) gives $$E_0(s^T)=sE_1(s^T),$$ where $E_1(s^T)$ is the generating function of the first hitting time of $0$ for the walk starting at $1$. By the same first-step argument, $$E_1(s^T)=\frac12s(1+E_2(s^T)),$$ where $E_2(s^T)$ is the corresponding generating function for the walk starting at $2$. By the strong Markov property (the walk must pass through $1$ on its way from $2$ to $0$), $$E_2(s^T)=E_1(s^T)^2.$$ Substituting, $E_1(s^T)$ solves the quadratic $\frac{s}{2}E_1(s^T)^2-E_1(s^T)+\frac{s}{2}=0$; keeping the root that vanishes at $s=0$ and multiplying by $s$ yields $$E_0(s^T)=1-\sqrt{1-s^2}.$$ Thus, the generating function of the time of the $k$th return to $0$ starting from $0$ is $$E_0(s^T)^k=\left(1-\sqrt{1-s^2}\right)^k.$$ These functions have series expansions in powers of $s^2$. The probability that the $k$th return to $0$ occurs after exactly $2N$ steps is the coefficient of $s^{2N}$ in $E_0(s^T)^k$, that is, $$ (-1)^N\sum_{i=1}^k(-1)^i{k\choose i}{i/2\choose N}. $$
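A quick exact check of the final formula (my own sketch, using rational arithmetic): the $k$th return time is a sum of $k$ independent first-return times, so its distribution is the $k$-fold convolution of $P(T=2n)=C_{n-1}/2^{2n-1}$, and the coefficient formula should reproduce it:

```python
from fractions import Fraction
from math import comb, factorial

def gbinom(x, N):
    """Generalized binomial coefficient C(x, N) for rational x."""
    num = Fraction(1)
    for j in range(N):
        num *= x - j
    return num / factorial(N)

def formula(k, N):
    """Claimed coefficient of s^(2N) in (1 - sqrt(1 - s^2))^k."""
    return (-1) ** N * sum(
        (-1) ** i * comb(k, i) * gbinom(Fraction(i, 2), N)
        for i in range(1, k + 1)
    )

def catalan(m):
    return comb(2 * m, m) // (m + 1)

# First-return distribution P(T = 2n) = C_{n-1} / 2^(2n-1), truncated.
max_N = 8
f = {2 * n: Fraction(catalan(n - 1), 2 ** (2 * n - 1))
     for n in range(1, max_N + 1)}

def convolve(a, b):
    """Distribution of a sum of independent return times, truncated."""
    out = {}
    for s, ps in a.items():
        for t, pt in b.items():
            if s + t <= 2 * max_N:
                out[s + t] = out.get(s + t, Fraction(0)) + ps * pt
    return out

dist = dict(f)  # k-fold convolution, starting with k = 1
for k in range(1, 4):
    for N in range(1, max_N + 1):
        assert formula(k, N) == dist.get(2 * N, Fraction(0))
    dist = convolve(dist, f)
```

Everything is done with `Fraction`, so the agreement is exact rather than up to floating-point error; the truncation at $2\,\mathtt{max\_N}$ steps is safe because every convolution term needed below that cutoff is available.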