Assuming you meant a binomial likelihood,
$$
\begin{eqnarray*}
\text{Posterior}(\theta) & \propto & \text{Likelihood}(\theta) \times \text{Prior}(\theta) \\ \\
& = & \text{Binomial}(20 \mid 30, \theta) \times \bigg[ \lambda \times \text{Beta}(\theta \mid 20,10) + (1-\lambda) \times \text{Beta}(\theta \mid 20, 20) \bigg] \\ \\
& = & \lambda \times \text{Binomial}(20 \mid 30, \theta) \times \text{Beta}(\theta \mid 20,10) \\[8pt]
&&+ (1 - \lambda) \times \text{Binomial}(20 \mid 30, \theta) \times \text{Beta}(\theta \mid 20,20) \\ \\
& = & \lambda { 30 \choose 20} \frac{1}{\text{B}(20, 10)} \theta^{40 - 1} (1-\theta)^{20-1} \\[8pt]
&&+ (1-\lambda) {30 \choose 20} \frac{1}{\text{B}(20,20)} \theta^{40-1} (1-\theta)^{30-1} \\ \\
& = & \lambda { 30 \choose 20} \frac{\text{B}(40,20)}{\text{B}(20, 10)} \cdot \frac{\theta^{40 - 1} (1-\theta)^{20-1}}{\text{B}(40,20)} \\[8pt]
&&+ (1-\lambda) {30 \choose 20} \frac{\text{B}(40,30)}{\text{B}(20,20)} \cdot \frac{\theta^{40-1} (1-\theta)^{30-1}}{\text{B}(40,30)} \\ \\
& = & \lambda { 30 \choose 20} \frac{ \text{B}(40,20)}{\text{B}(20, 10)} \text{Beta}(\theta \mid 40,20) \\[8pt]
&&+ (1- \lambda) { 30 \choose 20} \frac{\text{B}(40,30)}{\text{B}(20, 20)} \text{Beta}(\theta \mid 40,30) \\ \\
& \propto & \lambda \frac{ \text{B}(40,20)}{\text{B}(20, 10)} \text{Beta}(\theta \mid 40,20) \\[8pt]
&&+ (1- \lambda) \frac{\text{B}(40,30)}{\text{B}(20, 20)} \text{Beta}(\theta \mid 40,30).
\end{eqnarray*}
$$
Thus, the new weights $\omega_1, \omega_2$ are
$$
\begin{eqnarray*}
\omega_1 & = & \left( \lambda \frac{ \text{B}(40,20)}{\text{B}(20, 10)} \right) \left( \lambda \frac{ \text{B}(40,20)}{\text{B}(20, 10)} + (1- \lambda) \frac{\text{B}(40,30)}{\text{B}(20, 20)} \right)^{-1} \\
\omega_2 & = & 1 - \omega_1,
\end{eqnarray*}
$$
and
$$
\text{Posterior}(\theta) = \omega_1 \times \text{Beta}(\theta \mid 40,20) + \omega_2 \times \text{Beta}(\theta \mid 40,30).
$$
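The mixture weights can be evaluated numerically. Here is a minimal sketch in Python (the function names are mine, not from the question), using log-gamma to compute the Beta function in log space for numerical stability:

```python
import math

def log_beta(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def posterior_weights(lam):
    # Unnormalised weights from the derivation above:
    # lam * B(40,20)/B(20,10) and (1 - lam) * B(40,30)/B(20,20)
    w1 = lam * math.exp(log_beta(40, 20) - log_beta(20, 10))
    w2 = (1 - lam) * math.exp(log_beta(40, 30) - log_beta(20, 20))
    total = w1 + w2
    return w1 / total, w2 / total
```

Note that the ratio $\text{B}(40,20)/\text{B}(20,10)$ is (up to the common binomial coefficient) the marginal likelihood of the data under the first prior component, so the component whose prior mean better matches the observed frequency $20/30$ gains weight in the posterior.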
Your approach b) is wrong: single-step updating, in which all data are used together to update the prior and arrive at the posterior, and Bayesian sequential (also called recursive) updating, in which data are used one at a time and each posterior becomes the prior for the next iteration, must give exactly the same result. This consistency between batch and sequential updating is one of the pillars of Bayesian statistics.
Your error is simple: once you have updated the prior with the first sample (the first "Head"), only one observation remains to include in the likelihood when updating the new prior. In formulas:
$$P(F|HH) =\frac{P(H|H,F)P(F|H)}{P(H|H)} $$
This formula is just Bayes' theorem, applied after the first event "Head" has already happened: since conditional probabilities are probabilities themselves, Bayes' theorem is also valid for probabilities conditioned on the event "Head", and there is really nothing more to prove. However, I have found that sometimes people don't find this result self-evident, so I give a slightly long-winded proof.
$$P(F|HH) =\frac{P(HH|F)P(F)}{P(HH)}= \frac{P(H|H,F)P(H|F)P(F)}{P(HH)}$$
by the chain rule of conditional probabilities. Then, multiplying numerator and denominator by $P(H)$, you get
$$\frac{P(H|H,F)P(H|F)P(F)}{P(HH)}=\frac{P(H|H,F)P(H|F)P(F)P(H)}{P(HH)P(H)}=\frac{P(H|H,F)P(H)}{P(HH)}\frac{P(H|F)P(F)}{P(H)}=\frac{P(H|H,F)}{P(H|H)}\frac{P(H|F)P(F)}{P(H)}=\frac{P(H|H,F)P(F|H)}{P(H|H)}$$
where in the last step I just applied Bayes' theorem. Now:
$$P(H|H,F)= P(H|F)=0.5$$
This is obvious: conditionally on the coin being fair (or biased), we are modelling the coin tosses as i.i.d. Applying the same idea to the denominator, we get:
$$P(H|H)= P(H|F,H)P(F|H)+P(H|B,H)P(B|H)=P(H|F)P(F|H)+P(H|B)P(B|H)=0.5\cdot0.\bar{3}+1\cdot0.\bar{6}$$
Finally:
$$P(F|HH) =\frac{P(H|H,F)P(F|H)}{P(H|H)}=\frac{0.5\cdot0.\bar{3}}{0.5\cdot0.\bar{3}+1\cdot0.\bar{6}}=0.2$$
QED
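The whole calculation is easy to check numerically. Below is a minimal sketch in Python using the example's numbers ($P(F)=P(B)=0.5$, $P(H\mid F)=0.5$, $P(H\mid B)=1$); the structure and names are mine:

```python
def update(prior_fair, p_head_fair=0.5, p_head_biased=1.0):
    # One Bayes step after observing a single "Head":
    # P(F | H) = P(H | F) P(F) / [P(H | F) P(F) + P(H | B) P(B)]
    num = p_head_fair * prior_fair
    den = num + p_head_biased * (1 - prior_fair)
    return num / den

# Sequential updating: one step per observed "Head"
p = 0.5
for _ in range(2):
    p = update(p)

# Batch updating: condition on both "Heads" at once
batch = (0.5**2 * 0.5) / (0.5**2 * 0.5 + 1.0**2 * 0.5)

print(p, batch)  # both equal 0.2, as derived above
```

The sequential result matches the single-step result, as it must.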
That's it: have fun using Bayesian sequential updating; it's very useful in a lot of situations! If you want to know more, there are many good resources on the Internet.
Best Answer
Because the support is discrete (assuming someone tells you something like "I've used either a fair coin or a coin with a known bias of 0.2"), this is really just a straightforward application of Bayes' Rule.
Recall
$$ Pr(\theta=0.5 \vert X_1, \dots, X_n) = \dfrac{Pr(X_1, \dots, X_n \vert \theta=0.5)\tau_0}{Pr(X_1, \dots, X_n \vert \theta=0.5)\tau_0 + Pr(X_1, \dots, X_n \vert \theta=0.2)(1-\tau_0)}$$
Because you're talking about coinflips, the likelihood is binomial
$$ Pr(X_1, \dots, X_n \vert \theta) = \binom{n}{k}\theta^k(1-\theta)^{n-k} $$
where $k$ is the number of heads in the sample. Because the parameter space contains only two values, you need only compute the first quantity and then subtract it from 1 to find the other.
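As a concrete sketch (in Python; the function name and example numbers are mine), here is the two-point posterior. The binomial coefficient $\binom{n}{k}$ cancels between numerator and denominator, so it can be dropped:

```python
def posterior_fair(n, k, tau0=0.5):
    # Pr(theta = 0.5 | data) for the two-point support {0.5, 0.2};
    # the binomial coefficient cancels, so only theta^k (1-theta)^(n-k) matters.
    lik_fair = 0.5**k * 0.5**(n - k)
    lik_bias = 0.2**k * 0.8**(n - k)
    num = lik_fair * tau0
    return num / (num + lik_bias * (1 - tau0))

# e.g. 5 heads in 10 flips already favours the fair coin fairly strongly
print(posterior_fair(10, 5))
```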
As for converging to the truth, there will come a point at which the likelihood dominates the prior. This fact, together with the law of large numbers, should be enough to justify your point, assuming these are the only two biases possible.
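To see the likelihood dominating the prior, note that if the observed frequency stays near $0.5$, the likelihood ratio for $\theta = 0.5$ against $\theta = 0.2$ grows without bound, so it eventually swamps any fixed prior weight. A quick illustration (sample sizes are my own choice):

```python
# Likelihood ratio theta=0.5 vs theta=0.2 when half the flips are heads:
# (0.5^n) / (0.2^(n/2) * 0.8^(n/2)) = 1.5625^(n/2), which explodes with n.
ratios = []
for n in (10, 50, 100):
    k = n // 2
    ratios.append(0.5**n / (0.2**k * 0.8**(n - k)))
print(ratios)
```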