Let $p$ be the probability of success (here $p = 0.3$) and $q = 1 - p$ the probability of failure (here $q = 0.7$). We have $X = k$ precisely when the first $k - 1$ trials contain exactly $2$ successes (and hence $k - 3$ failures) and the $k$th trial is a success.
The probability of exactly $2$ successes in a sequence of $k - 1$ trials is $^{k-1}\text{C}_2\, p^2 q^{k - 3}$ (a binomial distribution with $n = k - 1$). Multiplying this by $p$, the probability of a success on the $k$th trial, we obtain
$$P(X = k) = ^{k-1}\text{C}_2\, p^3 q^{k - 3},\ k = 3, 4, \ldots$$
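As a quick sanity check (a simulation sketch, not part of the derivation; the sample size and seed are arbitrary choices of mine), we can estimate $P(X = k)$ empirically and compare it with the formula:

```python
import random
from math import comb

def trials_until_third_success(p, rng):
    """Run Bernoulli(p) trials until the 3rd success; return the trial count."""
    successes, k = 0, 0
    while successes < 3:
        k += 1
        if rng.random() < p:
            successes += 1
    return k

p, q = 0.3, 0.7
rng = random.Random(0)
n = 200_000
counts = {}
for _ in range(n):
    k = trials_until_third_success(p, rng)
    counts[k] = counts.get(k, 0) + 1

# Compare the formula C(k-1, 2) p^3 q^(k-3) with the empirical frequencies.
for k in range(3, 8):
    exact = comb(k - 1, 2) * p**3 * q ** (k - 3)
    print(k, exact, counts.get(k, 0) / n)
```

The empirical frequencies agree with the formula to within the usual Monte Carlo noise.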
$\begin{align}
E[X] & = \sum_{k = 3}^{\infty} kP(X = k)\\
& = \sum_{k = 3}^{\infty} \dfrac{k(k - 1)(k - 2)}{2}\, p^3 q^{k - 3}\\
& = \dfrac{p^3}{2}\sum_{k = 3}^{\infty} k(k - 1)(k - 2)\, q^{k - 3}\\
& = \dfrac{p^3}{2}\,\dfrac{d^3}{dq^3}\sum_{k = 3}^{\infty} q^{k}\\
& = \dfrac{p^3}{2}\,\dfrac{d^3}{dq^3}\left( \dfrac{1}{1 - q} \right)\\
& = \dfrac{p^3}{2}\left( \dfrac{6}{(1 - q)^4} \right)\\
& = \dfrac{p^3}{2}\left( \dfrac{6}{p^4} \right)\\
& = \boxed{\dfrac{3}{p}}
\end{align}$
Here we may replace $\sum_{k = 3}^{\infty} q^k$ by the full geometric series $\sum_{k = 0}^{\infty} q^k = \dfrac{1}{1 - q}$, because the omitted terms $1 + q + q^2$ vanish under three differentiations.
Thus, the expected number of trials is $\dfrac{3}{p} = \dfrac{3}{0.3} = 10$. Then the expected number of failures is $10 - 3 = 7$.
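The series for $E[X]$ can also be summed numerically; a short sketch (the truncation index $2000$ is arbitrary, and safe because the terms decay like $q^k$):

```python
from math import comb

# Sum k * P(X = k) over a long truncated range and compare with 3/p.
p, q = 0.3, 0.7
mean = sum(k * comb(k - 1, 2) * p**3 * q ** (k - 3) for k in range(3, 2000))
print(mean, 3 / p)  # both ≈ 10.0
```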
Since we are looking at an infinite number of trials, removing a finite prefix of trials still leaves an infinite number of trials. We can therefore ignore such a prefix when computing the probability of $E$ (even if the conditions for $E$ were already fulfilled within the prefix), and so $P(E) = P(E \mid B)$ for any event $B$ that restricts the outcomes of only a finite prefix of the trials.
This hopefully answers your question, but the partial solution you gave above seems rather complicated to me. Thankfully, I was able to find a PDF version of the book via Google. There, the author gives a fairly involved expression for $P(E)$ (not included in your question). That surprised me, because intuitively I expected $P(E) = 1$.
Thinking about that in more detail, I am now convinced that indeed $P(E) = 1$: Let $A$ be the event that we have $n$ consecutive successes followed by $m$ failures in $n + m$ trials. Clearly $P(A) \neq 0$, because $P(A) = p^n \cdot q^m$ (assuming $0 < p < 1$).
Now let us iterate these $n + m$ trials and let $E'$ be the event that $A$ occurs in the first block of $n + m$ trials, or in the second block, or in the third block, etc. The blocks are independent, so the probability that $A$ occurs in none of the first $B$ blocks is $P(\overline{A})^B$. Since $P(A) \neq 0$, and therefore $P(\overline{A}) \neq 1$, this tends to $0$ as $B \to \infty$; hence $P(\overline{E'}) = 0$ and $P(E') = 1$.
Since $E' \subseteq E$, it follows that $P(E) = 1$ as well.
For short: In an infinite sequence of trials you are able to find any finite pattern with probability $1$.
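The block argument can be quantified: the probability that the pattern fills none of the first $B$ disjoint blocks is $P(\overline{A})^B = (1 - p^n q^m)^B$, which decays geometrically. A small sketch, with illustrative values $n = 2$, $m = 1$ of my own choosing:

```python
# Probability that the pattern (n successes then m failures) fills
# none of the first B disjoint blocks: (1 - p^n q^m)^B -> 0 as B grows.
p, q, n, m = 0.3, 0.7, 2, 1
pA = p**n * q**m  # probability the pattern occupies one whole block
for B in (10, 100, 1000):
    print(B, (1 - pA) ** B)
```

Already at $B = 1000$ the probability of never seeing the pattern is astronomically small.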
Therefore I suspect that there is a mistake in the book's solution, unless I made a mistake myself or misunderstood the problem.
Best Answer
We can model this as an absorbing Markov chain with transition matrix
$$ P=\left( \begin{array}{cccc} p & 1-p & 0 & 0 \\ 0 & 1-p & p & 0 \\ 0 & 1-p & 0 & p \\ 0 & 0 & 0 & 1 \\ \end{array} \right). $$
Write
$$ P = \begin{pmatrix}Q&R\\\mathbf 0&I\end{pmatrix} $$
where $Q$ is the submatrix of $P$ corresponding to transitions between transient states and $R$ the submatrix of $P$ corresponding to transitions from transient states to the absorbing state. Then the expected number of steps before being absorbed when starting in transient state $i$ is given by the $i^{\mathrm{th}}$ entry of $N\mathbf 1$, where
$$ N := \sum_{k=0}^\infty Q^k = (I-Q)^{-1}. $$
A straightforward computation yields the result,
$$ \frac1{p^2(1-p)}. $$
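The claimed value can be checked numerically. The sketch below computes $t = (I - Q)^{-1}\mathbf 1$ for $p = 0.3$; the `solve` helper is just a small Gaussian elimination of my own (so no external libraries are needed), not part of the answer itself:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

p = 0.3
# Transient-to-transient submatrix Q of the transition matrix above.
Q = [[p, 1 - p, 0.0],
     [0.0, 1 - p, p],
     [0.0, 1 - p, 0.0]]
I_minus_Q = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(3)] for i in range(3)]
t = solve(I_minus_Q, [1.0, 1.0, 1.0])  # t = (I - Q)^{-1} 1
print(t[0], 1 / (p**2 * (1 - p)))  # both ≈ 15.873
```

The first entry of $t$ matches $\frac{1}{p^2(1-p)}$ as claimed.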