A good question - these are the sorts of things to examine carefully when dealing with probabilities and infinities, both areas where our intuitions can often steer us wrong.
Method 1: Relation to the finite $a$ case.
The basic reason the claimed limit holds is that any outcome ending in ruin against an infinitely rich adversary is also an outcome ending in ruin against some sufficiently rich finite adversary. But that's still just intuition; time to dig in.
An "outcome" of the match for particular real positive values $p,q$ and positive integer values $a,z$, with $p+q \leq 1$ and $z \leq a$ is a finite or infinite "valid" sequence of values $v_i \in \{-1,0,+1\}$ with $-1$ representing a loss and $+1$ representing a win. The total winnings after $k$ games is $w_k = z + \sum_{i=1}^k v_i$. A sequence is valid if $0 < w_k < a$ for every positive integer $k$ less than the length of the sequence, and $w_k=0$ or $w_k=a$ if $k$ is the full length of a finite sequence. The gambler is ruined if the sequence is finite, ending with $w_k=0$.
Similarly, an "outcome" of the match with an infinitely rich adversary is a finite or infinite valid sequence of $v_i \in \{-1,0,+1\}$, where with winnings sequence $w_k = z + \sum_{i=1}^k v_i$, we have $0 < w_k$ for every positive $k$ less than the sequence length, or $w_k=0$ if $k$ is the full length of a finite sequence. The gambler is ruined if the sequence is finite.
For the remainder, assume $p$, $q$, and $z$ to be fixed. All comparisons will only involve differences in $a$ or the case of an infinitely rich adversary.
Now if $a_1<a_2$, we can map any ruin outcome for a match with $a=a_1$ to a ruin outcome for a match with $a=a_2$, just by using the exact same finite sequence. A sequence valid with $a=a_1$ is bounded above by $a_1$, so also bounded above by $a_2$, so valid with $a=a_2$. Similarly, we can map the same outcome to a ruin outcome for a match against an opponent with infinite funds, since validity in this case just removes the requirement of any upper bound. The probability of each such ruin outcome is the same in all cases, since it depends on just $p$, $q$, and the sequence $\{v_i\}$.
So if we call $R_a$ the set of all ruin outcomes for a given $a$ and $R_\infty$ the set of all ruin outcomes against an infinitely rich adversary, we have $R_{a_1} \subseteq R_{a_2} \subseteq R_\infty$ and therefore $P(R_{a_1}) \leq P(R_{a_2}) \leq P(R_\infty)$. Considered as a function of $a$, $P(R_a)$ is a non-decreasing sequence; it is exactly the ruin probability denoted $q_z$ for the finite match. Also, any element of $R_\infty$ is a finite sequence with bounded winnings, so it is an element of $R_a$ for every sufficiently large $a$. That is, $R_\infty = \bigcup_{a=1}^\infty R_a$. Since probabilities of countably many pairwise disjoint outcome sets add:
$$ \begin{align*}
P(R_\infty) &= P\left(\bigcup_{a=1}^\infty R_a\right) = P\left(R_1 \cup \bigcup_{a=2}^\infty (R_a \setminus R_{a-1})\right) \\
&= P(R_1) + \sum_{a=2}^\infty [P(R_a) - P(R_{a-1})] = \lim_{a \to \infty} P(R_a)
\end{align*} $$
since you already know the final limit does exist.
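As a sanity check on this monotone-limit picture, here is a quick Monte Carlo sketch (my own illustration, not part of the proof; the function name and parameter values are arbitrary choices) that estimates $P(R_a)$ for growing $a$ and lets you watch it increase toward the limit:

```python
import random

def ruin_prob(p, q, z, a, trials=20000, max_steps=100000, seed=0):
    """Monte Carlo estimate of the ruin probability with initial
    stake z against an adversary holding a - z (total pot a)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        w = z
        for _ in range(max_steps):
            r = rng.random()
            if r < p:
                w += 1          # gambler wins a game
            elif r < p + q:
                w -= 1          # gambler loses a game
            # otherwise a tie: w unchanged
            if w == 0:          # ruin
                ruined += 1
                break
            if w == a:          # gambler takes the whole pot
                break
    return ruined / trials

# P(R_a) should be non-decreasing in a; with p < q it approaches 1
for a in (5, 10, 20, 40):
    print(a, ruin_prob(p=0.3, q=0.5, z=3, a=a))
```

Monte Carlo noise can blur the monotonicity slightly between adjacent values of $a$, but the overall increase toward the limit is clear.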
Partial Method 2: Direct recursion relation.
For given real positive values $p$ and $q$ with $p+q \leq 1$ and a non-negative integer $z$, we'll solve for the probability $q_z$ that the gambler will face ruin against an opponent with infinite funds. Neither the variable $a$ nor any comparison to finite-$a$ matches is involved, so this can give an answer but won't show it equals the claimed limit.
Each game in the match transitions from one state to a similar match with possibly different $z$. The value of $z$ increases by $1$ with probability $p$, decreases by $1$ with probability $q$, or stays the same with probability $1-p-q$. Therefore if $z \geq 1$,
$$ q_z = p q_{z+1} + q q_{z-1} + (1-p-q) q_z $$
$$ (p+q) q_z = p q_{z+1} + q q_{z-1} $$
We also know $q_0 = 1$: when funds are zero, the gambler is already ruined.
The general solution of this linear recursion relation is (usually) a linear combination of solutions of the form $q_z = x^z$ for some complex $x$. Solving for $x$,
$$ (p+q) x^z = p x^{z+1} + q x^{z-1} $$
$$ p x^2 - (p+q) x + q = 0 $$
$$ x = \frac{p+q \pm \sqrt{(p+q)^2 - 4pq}}{2p} = \frac{p+q \pm (p-q)}{2p} $$
$$ x = 1, x = \frac{q}{p} $$
$$ q_z = c_1 + c_2 \left(\frac qp \right)^z $$
Except if $p=q$, the two roots coincide and we've lost a degree of freedom. In that case, the general solution is an arbitrary linear function $q_z = c_1+c_2 z$.
If $p<q$, since $\lim_{z \to \infty} (q/p)^z = \infty$ but we must have $0 \leq q_z = c_1 + c_2(q/p)^z \leq 1$ for all positive integers $z$, the solution must have $c_2=0$. Then $q_z$ is constant; since $q_0=1$, $q_z=1$. The gambler will be ruined with probability $1$.
Similarly, if $p=q$, since $0 \leq q_z = c_1+c_2 z \leq 1$ for all $z$, the only possible linear slope is $c_2=0$. Again $q_z$ is constant and must be $1$, and the gambler will be ruined with probability $1$.
But if $p>q$, the recursion equation alone doesn't give a single answer. It determines $q_z = 1-c+c(q/p)^z$ for some $0 \leq c \leq 1$, but every such solution satisfies the recursion equation, matches the initial condition $q_0=1$, and gives probabilities $q_z$ between $0$ and $1$. To get further with this method, we'd need to appeal to a relationship with finite matches, to properties of random walks, or similar. Comparison to the rules modified to have a finite bound on winnings, then taking the limit as that bound increases, is probably one of the easiest ways to completely solve the problem.
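For what it's worth, the member of that family picked out by the finite-match limit is $c=1$, i.e. $q_z=(q/p)^z$ when $p>q$. A small numeric sketch (my own check; the sample values of $p$ and $q$ are arbitrary) confirms that this candidate satisfies both the recursion relation and the boundary condition:

```python
# Check that q_z = (q/p)**z satisfies (p+q) q_z = p q_{z+1} + q q_{z-1}
# together with q_0 = 1, for a sample p > q.
p, q = 0.5, 0.3

def qz(z):
    return (q / p) ** z

assert qz(0) == 1.0                      # boundary condition: ruined at z = 0
for z in range(1, 12):
    lhs = (p + q) * qz(z)
    rhs = p * qz(z + 1) + q * qz(z - 1)
    assert abs(lhs - rhs) < 1e-12        # recursion holds up to rounding
print("recursion satisfied for z = 1..11")
```

Of course this only verifies that $(q/p)^z$ is *a* solution; as noted above, singling it out requires the comparison to finite matches.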
Best Answer
This can be proved by appealing to the optional stopping theorem for martingales. Suppose $\ X_r\ $ is the amount the player wins in round $\ r\ ,$ with $$ \color{blue}{X_r\in[-a,b\,]} $$ with probability $1$, where $\ a\ $ and $\ b\ $ are positive, $\ \mathbb{E}\big(X_r\big)=0\ ,$ and $\ \mathbb{E}\big(\big|X_r\big|\big)>0\ .$ Then the player's wealth $\ W_r\ $ after round $\ r\ $ is given by $\ W_0=M\ $ and $$ W_r=M+\sum_{i=1}^rX_i $$ for $\ r\ge1\ ,$ provided the process hasn't terminated. Since \begin{align} \Bbb{E}\big(W_{r+1}\,\big|\,W_1,W_2,\dots,W_r\,\big)&=\Bbb{E}\big(W_r+X_{r+1}\,\big|\,W_1,W_2,\dots,W_r\,\big)\\ &=W_r\ , \end{align} the process $\ \big\{W_r\big\}\ $ is a martingale.

The process stops on round $\ T\ ,$ given by $$ T=\inf\big\{\,r\ge1\,\big|\,W_r\le0\ \text{ or }\ W_r\ge N\,\big\}\ . $$ The random variable $\ T\ $ is a stopping time for $\ \big\{W_r\big\}\ ,$ and is finite with probability $1$ (proved below). Therefore, the optional stopping theorem tells us that $$ \Bbb{E}\big(W_T\big)=\Bbb{E}\big(W_0\big)=M\ . $$

Since $\ 0<W_{T-1}<N\ $ and $\ W_T=W_{T-1}+X_T\ ,$ we have $\ -a<W_T<N+b\ $ with probability $1$, and since $\ W_T\le0\ $ or $\ W_T\ge N\ ,$ it follows that $\ W_T\in(-a,0]\cup[N,N+b)\ $ with probability $1$. Thus if $\ G\ $ is the cumulative distribution function of $\ W_T\ ,$ then \begin{align} M&=\Bbb{E}\big(W_T\big)\\ &=\int_{-a}^0w\,dG(w)+\int_N^{N+b}w\,dG(w)\\ &=\int_{-a}^0w\,dG(w)+\int_N^{N+b}(w-N)\,dG(w)+N\int_N^{N+b}dG(w)\\ &=\int_{-a}^0w\,dG(w)+\int_N^{N+b}(w-N)\,dG(w)\\ &\hspace{2em}+N\,\Bbb{P}\big(N\le W_T<N+b\big)\ . \end{align} Now $\ -a\le\int_{-a}^0w\,dG(w)\le0\ ,$ $\ 0\le\int_N^{N+b}(w-N)\,dG(w)\le b\ ,$ and $\ \Bbb{P}\big(N\le W_T<N+b\big)=1-P_{\text{ruin}}\ ,$ where $\ P_{\text{ruin}}\ $ is the probability of the player's ruin.
Therefore we can write the above equation as: $$ M=I_1+I_2+\left (1-P_{\text{ruin}} \right)N\ , $$ where $\ \big|I_1+I_2\big|\le\max(a,b)\ ,$ or $$ P_{\text{ruin}}-\frac{N-M}{N}=\frac{I_1+I_2}{N}\ , $$ from which it follows that $$ \color{blue}{\left|\,P_{\text{ruin}}-\frac{N-M}{N}\right|<\epsilon} $$ whenever $\color{blue}{\ N>\frac{\max(a,b)}{\epsilon}\ }.$
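The blue bound can be illustrated by a quick simulation sketch (my own illustration, not from the proof) in the simplest case $X_r=\pm1$ with probability $\tfrac12$ each, so that $a=b=1$ and the ruin probability should land near $(N-M)/N$:

```python
import random

def ruin_estimate(M, N, trials=20000, seed=1):
    """Simulate the fair +/-1 game (a = b = 1) starting at wealth M,
    stopping at W <= 0 or W >= N; return the fraction of ruins."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(trials):
        w = M
        while 0 < w < N:
            w += 1 if rng.random() < 0.5 else -1
        ruins += (w <= 0)
    return ruins / trials

M, N = 3, 10
print(ruin_estimate(M, N), (N - M) / N)  # estimate should be close to 0.7
```

Here the step size is exactly $\pm1$, so $\max(a,b)/N = 0.1$ and the martingale bound says the estimate can't sit farther than that from $(N-M)/N$; the simulation typically lands much closer.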
Proof that $\ T\ $ is almost surely finite
By definition, \begin{align} \{\,T=\infty\,\}&=\bigcap_{t=1}^\infty\big\{\,0<W_t<N\,\big\}\\ &=\bigcap_{t=1}^\infty\left\{\,{-}M<\sum_{i=1}^t X_i<N-M\,\right\}\\ &\subseteq\left\{\,{-}M<\sum_{i=1}^t X_i<N-M\,\right\} \end{align} for all $\ t\ .$ Therefore $$ \Bbb{P}(T=\infty)\le \Bbb{P}\left({-}M<\sum_{i=1}^t X_i<N-M\right)\tag{1}\label{e1} $$ for all $\ t\ .$

The conditions $\ \mathbb{E}\big(X_r\big)=0\ $ and $\ \mathbb{E}\big(\big|X_r\big|\big)>0\ $ guarantee that the random variables $\ X_r\ $ have positive variance $\ \sigma^2\ ,$ say. By the central limit theorem, therefore, the distributions $\ F_n\ ,$ say, of the random variables $$ \frac{\sum_{i=1}^n X_i}{\sigma\sqrt{n}} $$ converge pointwise to the standard normal distribution as $\ n\rightarrow\infty\ .$ Given $\ \epsilon>0\ ,$ let $\ \delta>0\ $ be such that $$ \mathcal{N}(0,1;\delta)-\mathcal{N}(0,1;{-}\delta)<\frac{\epsilon}{3}\ , $$ and let $\ L\ $ be such that \begin{align} \big|F_n({-}\delta)-\mathcal{N}(0,1;{-}\delta)\big|&<\frac{\epsilon}{3}\ \text{ and}\\ \big|F_n(\delta)-\mathcal{N}(0,1;\delta)\big|&<\frac{\epsilon}{3} \end{align} for all $\ n\ge L\ .$ Let $$ t\stackrel{\text{def}}{=}\max\left(L,\left\lceil\left(\frac{M}{\sigma\delta}\right)^2\right\rceil,\left\lceil\left(\frac{N-M}{\sigma\delta}\right)^2\right\rceil\right)\ . $$ Then $\ \frac{M}{\sigma\sqrt{t}}\le\delta\ ,$ $\ \frac{N-M}{\sigma\sqrt{t}}\le\delta\ ,$ and \begin{align} \Bbb{P}\left({-}M<\sum_{i=1}^t X_i<N-M\right)&=\Bbb{P}\left(\frac{{-}M}{\sigma\sqrt{t}}<\frac{\sum_{i=1}^t X_i}{\sigma\sqrt{t}}<\frac{N-M}{\sigma\sqrt{t}}\right)\\ &\le\Bbb{P}\left({-}\delta\le\frac{\sum_{i=1}^t X_i}{\sigma\sqrt{t}}<\delta\right)\\ &=F_t(\delta)-F_t({-}\delta)\\ &=F_t(\delta)-\mathcal{N}(0,1;\delta)\\ &\hspace{1em}+\mathcal{N}(0,1;{-}\delta)-F_t({-}\delta)\\ &\hspace{1em}+\mathcal{N}(0,1;\delta)-\mathcal{N}(0,1;{-}\delta)\\ &<\epsilon\ . \end{align} Inequality (\ref{e1}) thus gives $$ \Bbb{P}(T=\infty)\le\epsilon\ , $$ and since $\ \epsilon\ $ can be an arbitrary positive real number, it follows that $\ \Bbb{P}(T=\infty)=0\ .$
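The key CLT step, that the probability of the partial sums staying in the fixed window $({-}M, N-M)$ shrinks to $0$ as $t$ grows, can also be seen numerically for the fair $\pm1$ game (my own sketch; the values of $M$, $N$, and the horizons are arbitrary):

```python
import random

# Estimate P(-M < sum_{i=1}^t X_i < N - M) for the fair +/-1 walk at
# several horizons t; by the CLT this mass shrinks roughly like 1/sqrt(t).
rng = random.Random(2)
M, N, trials = 3, 10, 4000

estimates = []
for t in (10, 100, 1000):
    inside = sum(
        -M < sum(1 if rng.random() < 0.5 else -1 for _ in range(t)) < N - M
        for _ in range(trials)
    )
    estimates.append(inside / trials)
    print(t, inside / trials)
```

The fixed window captures less and less of a distribution whose spread grows like $\sigma\sqrt{t}$, which is exactly why $\{T=\infty\}$ must have probability $0$.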