Consider the following events:
$A_{x,y}$: You observe $x$ heads before $y$ consecutive tails.
$B_{x,y}$: You observe $y$ consecutive tails before $x$ heads.
Let $X_{i}$ follow a Geometric($p$) distribution; that is, $X_{i}$ denotes the number of tails observed before the first head when tossing a coin with head probability $p$. Observe that, in particular, $P(A_{1,y}) = P(X_{1} < y)$.
In order to solve for the general case, consider $X_{1},\ldots,X_{x}$ i.i.d. Geometric(p). I claim that:
\begin{align*}
P(A_{x,y}) = P(X_{1} < y, \ldots,X_{x} < y) = P(X_{1} < y)^{x}
\end{align*}
To understand the claim, think of $X_{i}$ as the run of tails observed after the $(i-1)^{th}$ head and before the $i^{th}$ head; the event $A_{x,y}$ occurs exactly when every one of these $x$ runs is shorter than $y$, and the runs are independent. Hence, we only need to find $P(X_{1} < y)$. Since $P(X_{1} \geq y) = (1-p)^{y}$, this equals $1-(1-p)^{y}$.
Hence, in general, $P(A_{x,y}) = (1-(1-p)^{y})^{x}$.
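As a quick sanity check, the closed form can be compared against a Monte Carlo simulation (a sketch; the function names are mine, not from the original answer):

```python
import random

def p_heads_first(p, x, y):
    # Closed form derived above: P(A_{x,y}) = (1 - (1-p)^y)^x
    return (1.0 - (1.0 - p) ** y) ** x

def simulate(p, x, y, trials=200_000, seed=1):
    # Toss a p-biased coin until either x heads in total have appeared
    # or a run of y consecutive tails has appeared.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        heads = tail_run = 0
        while heads < x and tail_run < y:
            if rng.random() < p:
                heads += 1
                tail_run = 0   # a head interrupts any run of tails
            else:
                tail_run += 1
        wins += heads == x
    return wins / trials
```

For example, with $p=\frac12$, $x=2$, $y=2$ the formula gives $(1-\frac14)^2 = 0.5625$, and the simulated frequency agrees to within Monte Carlo error.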
For $i=1,\ldots,r-1,\;$ define event $B_i = \text{"First tail occurs on the $(i+1)^{th}$ toss"}$, and define $B_r = \text{"The first $r$ tosses are all heads"}$. Given initially that $A=H$, exactly one of these $r$ events must occur.
Note that, given $A=H$, if $B_r$ occurs then $E$ also occurs, which is to say $P(E\mid B_r\cap A=H) = 1$; moreover, $P(B_r\mid A=H) = p^{r-1}$, since tosses $2,\ldots,r$ must all be heads. Also, given $A=H$, if any of the other $B_i$ events occurs then the last toss was a tail, and it is as if we are starting over but initially given $A=T$. Therefore, conditioning on whether or not $B_r$ occurs,
\begin{eqnarray*}
P(E\mid A=H) &=& P(E\mid B_r\cap A=H)P(B_r\mid A=H) + P(E\mid B_r^c\cap A=H)P(B_r^c\mid A=H) \\
&& \\
&=& p^{r-1} + P(E\mid A=T)(1-p^{r-1}). \qquad\qquad\qquad\qquad\qquad(1) \\
\end{eqnarray*}
$\\$
Next, we can find $P(E\mid A=T)$ by way of its complement, $P(E^c\mid A=T)$, which is found in much the same way as $P(E\mid A=H)$.
For $i=1,\ldots,s-1,\;$ define event $C_i = \text{"First head occurs on the $(i+1)^{th}$ toss"}$, and define $C_s = \text{"The first $s$ tosses are all tails"}$, so that $P(C_s\mid A=T) = (1-p)^{s-1}$. Then, conditioning on whether or not $C_s$ occurs,
\begin{eqnarray*}
P(E\mid A=T) &=& 1 - P(E^c\mid A=T) \\
&& \\
&=& 1 - [P(E^c\mid C_s\cap A=T)P(C_s\mid A=T) + P(E^c\mid C_s^c\cap A=T)P(C_s^c\mid A=T)] \\
&& \\
&=& 1 - [1\cdot (1-p)^{s-1} + P(E^c\mid A=H)(1-(1-p)^{s-1})] \\
&& \\
&=& 1 - [(1-p)^{s-1} + (1 - P(E\mid A=H))(1-(1-p)^{s-1})] \\
&& \\
&=& (1-(1-p)^{s-1}) P(E\mid A=H). \qquad\qquad\qquad\qquad\qquad(2) \\
&& \\
\end{eqnarray*}
We now proceed to find $P(E)$. Substitute $(2)$ into $(1)$:
\begin{eqnarray*}
P(E\mid A=H) &=& p^{r-1} + (1-p^{r-1}) (1-(1-p)^{s-1}) P(E\mid A=H) \\
&& \\
\therefore \;\; P(E\mid A=H)\left(p^{r-1} - p^{r-1}(1-p)^{s-1} + (1-p)^{s-1}\right) &=& p^{r-1} \\
&& \\
P(E\mid A=H) &=& \dfrac{p^{r-1}}{p^{r-1} - p^{r-1}(1-p)^{s-1} + (1-p)^{s-1}} \qquad\qquad\qquad(3) \\
\end{eqnarray*}
Substitute this into $(2)$:
\begin{eqnarray*}
P(E\mid A=T) &=& \dfrac{p^{r-1} - p^{r-1} (1-p)^{s-1}}{p^{r-1} - p^{r-1}(1-p)^{s-1} + (1-p)^{s-1}} \qquad\qquad\qquad(4) \\
\end{eqnarray*}
$\\$
Finally, using results $(3)$ and $(4)$,
\begin{eqnarray*}
P(E) &=& P(E\mid A=H)P(A=H) + P(E\mid A=T)P(A=T) \\
&& \\
&=& \dfrac{p^r + p^{r-1}(1-p) - p^{r-1} (1-p)^s}{p^{r-1} - p^{r-1}(1-p)^{s-1} + (1-p)^{s-1}} \\
&& \\
&=& \dfrac{p^{r-1} - p^{r-1} (1-p)^s}{p^{r-1} - p^{r-1}(1-p)^{s-1} + (1-p)^{s-1}}.
\end{eqnarray*}
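The final expression is easy to check numerically. Below is a small sketch in exact rational arithmetic (the function name is mine); recall $E$ is the event that a run of $r$ heads occurs before a run of $s$ tails:

```python
from fractions import Fraction

def p_E(p, r, s):
    # P(E) = (p^{r-1} - p^{r-1}(1-p)^s) /
    #        (p^{r-1} - p^{r-1}(1-p)^{s-1} + (1-p)^{s-1})
    ph = p ** (r - 1)
    num = ph - ph * (1 - p) ** s
    den = ph - ph * (1 - p) ** (s - 1) + (1 - p) ** (s - 1)
    return num / den
```

Two sanity checks: with $r=1$ the formula reduces to $1-(1-p)^{s}$, i.e. one head before $s$ consecutive tails, matching $P(A_{1,s})$ from the first derivation; and a fair coin with $r=5$, $s=2$ gives $P(E)=3/34$.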
Best Answer
At any point in time as you keep tossing the coin, you can be in one of 5 different states: either your last throw was a T, or your last throws were a streak of 1, 2, 3 or 4 H. (You're in a sixth state just as you start the game, but we'll skip that one for now.)
In each state, you have some probability of winning, losing, or transitioning into one of the other 4 states. For each state, there is also a probability of eventually winning from that point on. I'll call those eventual winning probabilities $P(T)$ and $P(1H), P(2H), P(3H), P(4H)$.
From here we can set up equations (using the law of total probability). For instance, if you're in the 4H state, then there is a 50% chance you'll win on the next throw, and a 50% chance you'll enter state T. Thus $$ P(4H) = \frac12 + \frac12P(T) $$ Similarly, if you're in state T, you have a 50% chance of losing, and a 50% chance of entering state 1H. Thus $$ P(T) = \frac12P(1H) $$ You can similarly set up equations for $P(1H), P(2H)$ and $P(3H)$. You now have five equations and five unknowns, so the system can be solved.
Once you've done that, you can look at the state you're in as you start the game, and see that the probability of winning from there is $$ \frac12P(T) + \frac12P(1H) $$
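The five equations can be solved by back-substitution: write each $P(kH)$ in the form $a + b\,P(T)$, working down from $P(4H)$, then close the loop with $P(T)=\frac12 P(1H)$. A sketch in exact arithmetic (function name mine):

```python
from fractions import Fraction

def solve_states():
    half = Fraction(1, 2)
    # P(4H) = 1/2 + (1/2) P(T): represent as (a, b) with P(4H) = a + b*P(T).
    a, b = half, half
    # P(kH) = 1/2 P((k+1)H) + 1/2 P(T) for k = 3, 2, 1.
    for _ in range(3):
        a, b = half * a, half * b + half
    # Now P(1H) = a + b*P(T); combine with P(T) = 1/2 P(1H) and solve for P(T).
    t = (half * a) / (1 - half * b)
    p1h = a + b * t
    # P(T), P(1H), and the winning probability from the start state.
    return t, p1h, half * t + half * p1h
```

This yields $P(T)=1/17$, $P(1H)=2/17$, and a winning probability of $3/34$ from the start, consistent with evaluating the earlier closed form at $p=\frac12$, $r=5$, $s=2$.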