Let $e_k$ be the expected number of bets starting from $k$ dollars. Then
\begin{align}
e_{k}&=1+pe_{k+1}+qe_{k-1},\qquad k=1,2,\dots,n-1\\
e_n&=0,\\
e_0&=1+e_1.
\end{align}
Since $p+q=1$, the first equation can be rearranged as
$$
e_{k+1}-e_k=(q/p)(e_k-e_{k-1})-1/p
$$
Iterating this relation $k$ times, and using $e_1-e_0=-1$ (from the boundary condition $e_0=1+e_1$), you get
\begin{align}
e_{k+1}-e_k
&=(q/p)^k(e_1-e_0)-\sum_{i=0}^{k-1}(q^i/p^{i+1})
\\&=(q/p)^k(-1)-\frac{1-(q/p)^k}{p-q}
\\e_{k+1}-e_k
&=(q/p)^k\frac{2q}{p-q}-\frac1{p-q}
\tag{*}
\end{align}
Now, take equation $(*)$ and sum both sides from $k=1$ to $n-1$. The left-hand side telescopes to $e_n-e_1=-e_1$, and the right-hand side is a geometric series plus a constant term, which lets you solve for $e_1$. You can then sum $(*)$ from $k=1$ to $m-1$ to get $e_m-e_1$, and hence a formula for $e_m$, for all $m$. The result is
$$
-e_1=\frac{2q}{p-q}\cdot\frac{(q/p)^n-q/p}{q/p-1}-\frac{n-1}{p-q}
$$
$$
e_1=\frac{n-1}{p-q}+2\cdot \frac{(q/p)^{n+1}-(q/p)^2}{(q/p-1)^2}
$$
$$
\bbox[5px, #ddddef, border: solid black 2px]
{e_m=\frac{n-m}{p-q}+2\cdot \frac{(q/p)^{n+1}-(q/p)^{m+1}}{(q/p-1)^2}}
$$
The derivation above covers $m\ge 1$; for $m=0$, use $e_0=1+e_1$.
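As a sanity check on the algebra, one can solve the original linear system for $e_0,\dots,e_{n-1}$ directly and compare with the boxed closed form. The sketch below (with arbitrary test values $n=10$, $p=0.6$; function names are mine) does this with plain Gaussian elimination:

```python
def expected_bets_closed_form(n, p, m):
    """Boxed formula: e_m = (n-m)/(p-q) + 2((q/p)^{n+1}-(q/p)^{m+1})/((q/p)-1)^2."""
    q = 1 - p
    r = q / p
    return (n - m) / (p - q) + 2 * (r ** (n + 1) - r ** (m + 1)) / (r - 1) ** 2

def expected_bets_linear_system(n, p):
    """Solve e_k = 1 + p*e_{k+1} + q*e_{k-1} (1<=k<=n-1), e_n = 0, e_0 = 1 + e_1."""
    q = 1 - p
    # Unknowns e_0..e_{n-1}; augmented matrix A | b, with b in column n.
    A = [[0.0] * (n + 1) for _ in range(n)]
    A[0][0], A[0][1], A[0][n] = 1.0, -1.0, 1.0      # e_0 - e_1 = 1
    for k in range(1, n):
        A[k][k - 1] -= q                             # -q*e_{k-1}
        A[k][k] += 1.0                               # +e_k
        if k + 1 < n:                                # e_n = 0 drops the last term
            A[k][k + 1] -= p                         # -p*e_{k+1}
        A[k][n] = 1.0                                # right-hand side
    # Gaussian elimination with partial pivoting, then back substitution.
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for j in range(col, n + 1):
                A[row][j] -= f * A[col][j]
    e = [0.0] * n
    for row in reversed(range(n)):
        s = A[row][n] - sum(A[row][j] * e[j] for j in range(row + 1, n))
        e[row] = s / A[row][row]
    return e  # e[m] for m = 0..n-1

n, p = 10, 0.6
e = expected_bets_linear_system(n, p)
for m in range(1, n):
    assert abs(e[m] - expected_bets_closed_form(n, p, m)) < 1e-9
```

The asserts pass, and the solver also confirms $e_0 = 1+e_1$, as expected from the boundary condition.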
If $z_k$ is the expected number of returns to zero starting from $k$, then you instead have
\begin{align}
z_{k}&=pz_{k+1}+qz_{k-1},\qquad k=1,2,\dots,n-1\\
z_n&=0,\\
z_0&=1+z_1.
\end{align}
We are counting the number of times you move from $0$ to $1$, which is the same as the number of visits to zero. This is even easier to solve. You instead have
$$
z_{k+1}-z_k=(q/p)^k(z_1-z_0)=(q/p)^k(-1)\tag{**}
$$
Therefore, using a telescoping sum with $(**)$,
$$
-z_1=-\sum_{k=1}^{n-1}(q/p)^k\implies z_1=\frac{(q/p)^n-q/p}{q/p-1}
$$
$$
z_m-z_1=-\sum_{k=1}^{m-1}(q/p)^k\implies \bbox[5px, #ddddef, border: solid black 2px]
{z_m=\frac{(q/p)^n-(q/p)^m}{q/p-1}}
$$
As before, the derivation assumes $m\ge 1$, but the formula also reproduces $z_0=1+z_1$ at $m=0$.
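The boxed $z_m$ can be checked the same way. The sketch below (arbitrary test values $n=12$, $p=0.55$; the function name is mine) uses a shooting method: write each $z_k = a_k + b_k t$ with $t=z_1$ unknown, propagate the recurrence, then choose $t$ so that $z_n=0$:

```python
def returns_to_zero(n, p):
    """Solve z_k = p*z_{k+1} + q*z_{k-1}, z_0 = 1 + z_1, z_n = 0; return [z_0..z_n]."""
    q = 1 - p
    # Represent z_k = a[k] + b[k]*t, where t = z_1 is still unknown.
    a = [0.0] * (n + 1)
    b = [0.0] * (n + 1)
    a[0], b[0] = 1.0, 1.0   # z_0 = 1 + t
    a[1], b[1] = 0.0, 1.0   # z_1 = t
    for k in range(1, n):
        # Rearranged recurrence: z_{k+1} = (z_k - q*z_{k-1}) / p
        a[k + 1] = (a[k] - q * a[k - 1]) / p
        b[k + 1] = (b[k] - q * b[k - 1]) / p
    t = -a[n] / b[n]        # enforce the boundary condition z_n = 0
    return [a[k] + b[k] * t for k in range(n + 1)]

n, p = 12, 0.55
r = (1 - p) / p  # q/p
z = returns_to_zero(n, p)
for m in range(n + 1):
    assert abs(z[m] - (r ** n - r ** m) / (r - 1)) < 1e-9
```

Note the check runs over all $m=0,\dots,n$, confirming the formula at $m=0$ as well.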
Let $B_{n,m}$ be the probability that starting with $\$m$, you hit $\$0$ before you hit $\$n$. This is the classical gambler's ruin problem, whose solution is well known to be
$$
B_{n,m} = \frac{(q/p)^n-(q/p)^m}{(q/p)^n-1}
$$
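Since this formula is quoted rather than derived here, a quick numerical check (arbitrary test values; the function name is mine) confirms it satisfies the defining recurrence $B_{n,m}=pB_{n,m+1}+qB_{n,m-1}$ with boundary values $B_{n,0}=1$ and $B_{n,n}=0$:

```python
def ruin_probability(n, p, m):
    """Probability of hitting 0 before n, starting from m (stated closed form)."""
    r = (1 - p) / p  # q/p
    return (r ** n - r ** m) / (r ** n - 1)

n, p = 15, 0.55
q = 1 - p
assert abs(ruin_probability(n, p, 0) - 1) < 1e-12   # ruin is certain from 0
assert abs(ruin_probability(n, p, n)) < 1e-12       # ruin is impossible from n
for m in range(1, n):
    lhs = ruin_probability(n, p, m)
    rhs = p * ruin_probability(n, p, m + 1) + q * ruin_probability(n, p, m - 1)
    assert abs(lhs - rhs) < 1e-12                   # one-step conditioning holds
```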
Now let $N$ be the number of times you reach zero before reaching $n$, starting from $m$. Then
$$
\bbox[5px, #ddddef, border: solid black 2px]
{
P(N=k)=
\begin{cases}
B_{n,m}\cdot B_{n,1}^{k-1}\cdot (1-B_{n,1}) & k>0 \\
1-B_{n,m} & k = 0
\end{cases}
}
$$
I might have found a solution that invokes the Martingale Convergence Theorem, but I'm unsure whether it is correct. I welcome anyone to verify it and point out any mistakes in my proof. Here, I use the fact that $S_n$ is a martingale (this solution takes the fair case $p=q=\frac12$), and observe that it can be written recursively as follows:
\begin{align*}
S_{n+1} =
\begin{cases}
0, &\text{if $S_n = 0$} \\
S_n - 1, &\text{if $S_n > 0$, and with probability $\frac{1}{2}$} \\
S_n + 1, &\text{if $S_n > 0$, and with probability $\frac{1}{2}$}
\end{cases}
\end{align*}
Suppose, for a contradiction, that with positive probability $S_n$ never hits $0$. Since $S_n$ is a non-negative martingale, the Martingale Convergence Theorem gives $S_n \to S$ almost surely for some random variable $S$. On the event where $S_n$ is never zero, the first case never occurs, so $|S_{n+1}-S_n|=1$ for every $n$; but a sequence of integers that changes by $1$ at every step cannot converge, contradicting $S_n \to S$ on this event. Thus $S_n$ must hit $0$ (and then stay there) almost surely.
Best Answer
Huygens' formulas change the size of the bet as time goes on, so they are not the best model for your description.
You should look into the random walk. Your problem is directly related to a 1-D random walk, which is discussed very well in that article. This is also related to diffusion in physics: given enough time, a small particle will move about even in a still environment. In 1827, Robert Brown noticed that pollen grains 'danced around' while floating on completely still water; this motion, modeled in two dimensions by a random walk, is called Brownian motion.
A 1-D random walk algorithm should be quite easy to simulate to get your player to ruin. No matter how small the probability of losing is, it will eventually 'diffuse' to ruin; it just may take some time.
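A minimal version of such a simulation might look like this (a sketch only; the starting capital, win probability, step cap, and trial count are arbitrary choices of mine):

```python
import random

def ruined(start, p_win, max_steps, rng):
    """Return True if a +/-1 random walk from `start` hits 0 within max_steps."""
    wealth = start
    for _ in range(max_steps):
        wealth += 1 if rng.random() < p_win else -1
        if wealth == 0:
            return True
    return False

rng = random.Random(0)   # fixed seed for reproducibility
trials = 2000
hits = sum(ruined(start=3, p_win=0.5, max_steps=100_000, rng=rng)
           for _ in range(trials))
print(hits / trials)     # fraction of trials ruined; close to 1 for a fair game
```

For the fair game, ruin is certain but the hitting time has a heavy tail, so a step cap is needed; raising `max_steps` pushes the observed fraction closer to $1$.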