Let $P_n$ be the probability that after $n$ trials the number of successes is even.
Let $p$ be the probability of success on any one trial. In our problem, $p=2/3$, but we might as well generalize a bit.
The number of successes after $n+1$ trials can be even in two ways: (a) After $n$ trials we had an odd number of successes and we got a success on the $(n+1)$-th trial; or (b) After $n$ trials we had an even number of successes, and we had a failure on the $(n+1)$-th trial. The probability of (a) is $p(1-P_n)$ and the probability of (b) is $(1-p)P_n$. We therefore have the recurrence
$$P_{n+1}=p(1-P_n)+(1-p)P_n=p+(1-2p)P_n.\qquad (\ast)$$
The recurrence $(\ast)$ is linear, and there are general tools for solving such recurrences. But the recurrence is particularly simple, as is the physical situation, so we will use a trick.
It is intuitively clear that if $p(1-p)\ne 0$ and $n$ is large, then $P_n$ should be close to $1/2$. Let $P_n=1/2+y_n$, and substitute in $(\ast)$. There is a lot of cancellation, and we obtain
$$y_{n+1}=y_n(1-2p). \qquad (\ast\ast)$$
Note that $y_0=1/2$: with $0$ trials there are certainly $0$ successes, an even number, so $P_0=1$ and hence $y_0=P_0-1/2=1/2$. Each time we increment $n$ by $1$, $y_n$ gets multiplied by $1-2p$. So the sequence $(y_n)$ is the geometric sequence
$y_n=\frac{1}{2}(1-2p)^n$,
and therefore
$$P_n=\frac{1}{2}(1+(1-2p)^n).$$
If $p=0$ or $p=1$, $P_n$ is completely determined by the parity of $n$. Suppose now that $p\ne 0$ and $p\ne 1$. Then $|1-2p|<1$, so
$(1-2p)^n$ approaches $0$ as $n \to\infty$. Thus $P_n$ indeed has limit $1/2$.
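As a quick numerical sanity check, a short sketch can iterate the recurrence $(\ast)$ directly and compare it against the closed form, here for the problem's $p=2/3$:

```python
# Sketch: check the closed form for P_n against the recurrence (*) with p = 2/3.
p = 2 / 3

P = 1.0  # P_0 = 1: with zero trials there are zero successes, an even number
for n in range(1, 21):
    P = p + (1 - 2 * p) * P                    # recurrence (*)
    closed = 0.5 * (1 + (1 - 2 * p) ** n)      # closed form
    assert abs(P - closed) < 1e-12
```

By $n=20$ the value of $P_n$ is already within about $10^{-10}$ of the limit $1/2$.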
Comments: $1$. The recurrence approach can be used with more complicated problems, such as determining the probability that the number of successes after $n$ trials is a multiple of $3$.
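For the multiple-of-$3$ variant mentioned above, one can track the full distribution of the success count modulo $3$; a minimal sketch (again with $p=2/3$, an assumption carried over from the problem):

```python
# Sketch: probability that the number of successes is a multiple of 3,
# tracking the distribution of (successes mod 3) with one recurrence step per trial.
p = 2 / 3
dist = [1.0, 0.0, 0.0]   # P(successes ≡ 0, 1, 2 mod 3) after 0 trials

for _ in range(60):
    # A trial either leaves the residue unchanged (failure, prob 1-p)
    # or advances it by one (success, prob p).
    dist = [(1 - p) * dist[r] + p * dist[(r - 1) % 3] for r in range(3)]

print(dist[0])   # approaches 1/3 for large n
```

As in the even/odd case, the distribution converges to the uniform one, here $(1/3,1/3,1/3)$.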
$2$. One can also use an algebraic approach which is basically a rewording of the solution by Didier Piau. Let $F_n(t)=(tp +(1-p))^n$. Expand $F_n(t)$ using the Binomial Theorem. Evaluate $F_n(t)$ at $t=1$ and at $t=-1$, add up. Suppose that $k$ is odd. Then the terms $\binom{n}{k}p^k(1-p)^{n-k}$ and $\binom{n}{k}(-p)^k(1-p)^{n-k}$ cancel. Suppose that $k$ is even. Then the terms $\binom{n}{k}p^k(1-p)^{n-k}$ and $\binom{n}{k}(-p)^k(1-p)^{n-k}$ are equal. It follows that
$$P_n=\frac{1}{2}\left(F_n(1)+F_n(-1)\right)=\frac{1}{2}\left(1+(1-2p)^n\right),$$
since $F_n(1)=(p+(1-p))^n=1$ and $F_n(-1)=(-p+(1-p))^n=(1-2p)^n$.
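The cancellation argument can be verified numerically: summing the binomial pmf over even $k$ should reproduce $\tfrac12(F_n(1)+F_n(-1))$. A small sketch, using the problem's $p=2/3$ and an arbitrary $n$:

```python
from math import comb

# Sketch: sum the binomial pmf over even k and compare with (F_n(1) + F_n(-1)) / 2.
p, n = 2 / 3, 12
q = 1 - p

even_sum = sum(comb(n, k) * p**k * q**(n - k) for k in range(0, n + 1, 2))
closed = 0.5 * (1 + (1 - 2 * p) ** n)
assert abs(even_sum - closed) < 1e-12
```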
$3$. When we let $P_n=1/2+y_n$, the recurrence simplified. Consider the general recurrence $P_{n+1}=a+bP_n$, where $b \ne 1$. Make the substitution $P_n=w+y_n$. We get $y_{n+1}=by_n +a+ bw-w$. By setting $w=a/(1-b)$, the constant term $a+(b-1)w$ vanishes, and we get the recurrence $y_{n+1}=by_n$, which is simple to solve.
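This shift trick yields the closed form $P_n = w + (P_0 - w)\,b^n$ with $w=a/(1-b)$. A minimal sketch (function name is illustrative, not from the source):

```python
# Sketch of the shift trick for P_{n+1} = a + b * P_n with b != 1.
def solve_linear_recurrence(a, b, P0, n):
    """Closed form P_n = w + (P0 - w) * b**n, where w = a / (1 - b)."""
    w = a / (1 - b)
    return w + (P0 - w) * b**n

# Check against direct iteration for the even-successes recurrence with p = 2/3:
# there a = p and b = 1 - 2p.
a, b, P = 2 / 3, 1 - 2 * (2 / 3), 1.0
for n in range(1, 15):
    P = a + b * P
    assert abs(P - solve_linear_recurrence(a, b, 1.0, n)) < 1e-12
```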
Let $p$ be the probability of success (here $p = 0.3$) and $q = 1 - p$ the probability of failure (here $q = 0.7$). Let $X$ be the number of trials needed to obtain the third success. Then $X = k$ exactly when the first $k - 1$ trials contain exactly $2$ successes and $k - 3$ failures, and the $k$th trial is a success.
The probability of exactly $2$ successes in a sequence of $k - 1$ trials is $\binom{k-1}{2} p^2 q^{k - 3}$ (a binomial distribution with $n = k - 1$). Multiplying this by $p$, the probability of success in the $k$th trial, we obtain
$$P(X = k) = \binom{k-1}{2} p^3 q^{k - 3},\qquad k = 3, 4, \ldots$$
The sum defining $E[X]$ can be recognized as the third derivative of the geometric series (differentiating $q^k$ three times gives $k(k-1)(k-2)q^{k-3}$, and the terms with $k < 3$ vanish, so the sum may start at $k = 0$):
$\begin{align}
E[X] & = \sum_{k = 3}^{\infty} kP(X = k)\\
& = \sum_{k = 3}^{\infty} \dfrac{k(k - 1)(k - 2)}{2}\, p^3 q^{k - 3}\\
& = \dfrac{p^3}{2}\sum_{k = 3}^{\infty} k(k - 1)(k - 2)\, q^{k - 3}\\
& = \dfrac{p^3}{2}\,\dfrac{d^3}{dq^3}\sum_{k = 0}^{\infty} q^k\\
& = \dfrac{p^3}{2}\,\dfrac{d^3}{dq^3}\left( \dfrac{1}{1 - q} \right)\\
& = \dfrac{p^3}{2}\left( \dfrac{6}{(1 - q)^4} \right)\\
& = \dfrac{p^3}{2}\left( \dfrac{6}{p^4} \right)\\
& = \boxed{\dfrac{3}{p}}
\end{align}$
Thus, the expected number of trials is $\dfrac{3}{p} = \dfrac{3}{0.3} = 10$. Then the expected number of failures is $10 - 3 = 7$.
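A quick Monte Carlo sketch can corroborate $E[X] = 3/p = 10$ (the run count and seed below are arbitrary choices):

```python
import random

# Monte Carlo sketch: expected number of trials to get r = 3 successes with p = 0.3.
random.seed(1)
r, p, runs = 3, 0.3, 200_000

total = 0
for _ in range(runs):
    successes = trials = 0
    while successes < r:
        trials += 1
        if random.random() < p:
            successes += 1
    total += trials

print(total / runs)   # should be close to r / p = 10
```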
Direct enumeration: Clearly, if $n$ trials are needed to obtain $r$ successes, then the final trial must be the observation of the $r^{\rm th}$ success. For if not, the $r^{\rm th}$ success occurs either before or after the $n^{\rm th}$ trial; in the former case, there was no need to continue the trials because the $r^{\rm th}$ success had already been observed, and in the latter, we cannot stop because the $r^{\rm th}$ success is yet to be observed.
Of the previous $n-1$ trials, there are $\binom{n-1}{r-1}$ ways that we could have observed the other $r-1$ successes in some order. Since the outcomes of all trials are independent, and $r-1$ of these trials are successes (and $n-r$ of these trials are failures), the resulting probability is $$\binom{n-1}{r-1} p^{r-1} (1-p)^{n-r}, \quad n = r, r+1, r+2, \ldots.$$ But this considers only the probability of all but the last trial. So for the last trial, which was a success with probability $p$, we get the desired probability $$\binom{n-1}{r-1} p^r (1-p)^{n-r}, \quad n = r, r+1, r+2, \ldots.$$
This probability distribution is known as the negative binomial distribution, and the above probability mass function gives the probability that $N = n$ trials are needed to observe the $r^{\rm th}$ success in a series of independent Bernoulli trials with success probability $p$.
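A numerical sketch can check two properties of this pmf: the probabilities sum to $1$, and the mean is $r/p$ (here $r=3$, $p=0.3$, matching the problem above; the truncation at $n=400$ is an assumption that the remaining tail is negligible):

```python
from math import comb

# Sketch: negative binomial pmf P(N = n) = C(n-1, r-1) * p^r * q^(n-r), n >= r.
r, p = 3, 0.3
q = 1 - p

pmf = {n: comb(n - 1, r - 1) * p**r * q**(n - r) for n in range(r, 400)}
total = sum(pmf.values())
mean = sum(n * prob for n, prob in pmf.items())

assert abs(total - 1) < 1e-6        # probabilities sum to 1
assert abs(mean - r / p) < 1e-4     # E[N] = r/p = 10
```

Note that some libraries (e.g. SciPy's `nbinom`) parameterize the negative binomial by the number of *failures* before the $r^{\rm th}$ success rather than the total number of trials, so their mean is $r q/p$ rather than $r/p$.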