This is a computational exercise, so think recursively. The current state of the coin flipping is captured by the ordered pair $(N,M)$ with $N\ge M\ge 0$, where $M$ is the length of the current run of consecutive heads. Let $e(N,M)$ be the expected number of additional flips needed to reach $N$ consecutive heads:
(1) There is a 50% chance the next flip will be heads, taking you to the state $(N,M+1)$, and a 50% chance it will be tails, taking you to the state $(N,0)$. Either way costs one flip, so the expectation satisfies the recursion
$$e(N,M) = \frac{1}{2} e(N,M+1) + \frac{1}{2} e(N,0) + 1.$$
(2) Base conditions: you have already stipulated that
$$e(N,0) = 2^{N+1}-2$$
and obviously
$$e(N,N) = 0$$
(no more flips are needed).
Here's the corresponding Mathematica program (including caching of intermediate results to speed up the recursion, which effectively makes it a dynamic programming solution):
(* general state, n > m > 0: memoize the simplified expectation *)
e[n_, m_] /; n > m > 0 := e[n, m] = 1 + (e[n, m + 1] + e[n, 0])/2 // Simplify
e[n_, 0] := 2^(n + 1) - 2  (* stipulated base case: no heads accumulated yet *)
e[n_, n_] := 0             (* base case: n consecutive heads already reached *)
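As a quick sanity check (these calls simply evaluate the definitions above), the memoized function reproduces the closed form derived below:

e[4, 0]   (* 30 = 2^5 - 2 *)
e[4, 2]   (* 24 = 2^5 - 2^3 *)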
The program would look similar in other programming languages that support recursion. Mathematically, we can verify that $e(N,M) = 2^{N+1} - 2^{M+1}$ simply by checking the recursion, because it obviously holds for the base cases:
$$2^{N+1} - 2^{M+1} = 1 + (2^{N+1} - 2^{M+2} + 2^{N+1} - 2)/2,$$
which is true for any $M$ and $N$, QED.
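If you would rather let Mathematica do that algebra, the same identity can be confirmed symbolically in one line:

Simplify[2^(n + 1) - 2^(m + 1) == 1 + (2^(n + 1) - 2^(m + 2) + 2^(n + 1) - 2)/2]   (* True *)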
More generally, the same approach will establish that $e(N,M) = \frac{p^{-N} - p^{-M}}{1-p}$ when the coin has probability $p$ of heads. The hard part is working out the base condition $e(N,0)$. That is done by chasing the recursion out $N$ steps until finally $e(N,0)$ is expressed in terms of itself and solving:
$$\eqalign{
e(N,0) &= 1 + p e(N,1) + (1-p) e(N,0) \\
&= 1 + p\left(1 + p e(N,2) + (1-p) e(N,0)\right) + (1-p) e(N,0) \\
\cdots \\
&= 1 + p + p^2 + \cdots + p^{N-1} + (1-p)[1 + p + \cdots + p^{N-1}]e(N,0);\\
e(N,0) &= \frac{1-p^N}{1-p} + (1-p^N)e(N,0); \\
p^N e(N,0) &= \frac{1-p^N}{1-p}; \\
e(N,0) &= \frac{p^{-N}-1}{1-p}.
}$$
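For completeness, here is a sketch of the biased-coin version of the program above; the name ep and the explicit p argument are my own additions, not part of the original program:

ep[n_, m_, p_] /; n > m > 0 := ep[n, m, p] = 1 + p ep[n, m + 1, p] + (1 - p) ep[n, 0, p] // Simplify
ep[n_, 0, p_] := (p^-n - 1)/(1 - p)  (* the base case just derived *)
ep[n_, n_, p_] := 0

For instance, Simplify[ep[4, 2, p] == (p^-4 - p^-2)/(1 - p)] returns True, and ep[4, 2, 1/2] gives 24, matching the fair-coin formula.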
Person 2 seems to be confusing the conditional probability of the next toss with the joint probability of the whole sequence. Denote by $H_k$ the event that the $k$th coin toss comes up heads (otherwise, $\neg H_k$). Fairness of the coin is mathematically modeled by the following assumptions:
- $\mathbb{P}(H_k) = 0.5$ for all $k$.
- The events $H_1,H_2,\ldots$ are mutually independent.
As Avraham points out in his answer, these are assumptions. Whether these assumptions form a reasonable model of the real coin tossing process is a physical rather than mathematical question.
Person 2
Person 2 is computing the joint probability that all 4 tosses are heads,
\begin{equation}
\mathbb{P}(H_1\cap H_2 \cap H_3 \cap H_4)
\end{equation}
applying the independence assumption, this equals
\begin{equation}
=\mathbb{P}(H_1)\mathbb{P}(H_2)\mathbb{P}(H_3)\mathbb{P}(H_4)
\end{equation}
and applying the first assumption,
\begin{equation}
= 0.5^4 = 0.0625.
\end{equation}
So, Person 2 correctly computes the probability of obtaining a sequence of 4 heads. Unfortunately, that is not what they want to know.
Person 1
Person 1 understands that the question is about the probability of the next toss coming up heads, given the history of previous tosses:
\begin{equation}
\mathbb{P}(H_4 \mid H_3\cap H_2\cap H_1)
\end{equation}
but, due to the independence assumption, conditioning on the earlier events does not change the probability of $H_4$, so the condition may simply be omitted:
\begin{equation}
=\mathbb{P}(H_4) = 0.5.
\end{equation}
Thinking about the whole sequence
Person 2 was thinking about the sequence of all 4 tosses, while the question was worded in terms of the 4th toss alone. However, instead of considering only the event of the next toss, Person 1 could just as well have computed the conditional probability of the whole sequence, given what has already been observed:
\begin{equation}
\mathbb{P}(H_4\cap H_3\cap H_2\cap H_1 \mid H_3 \cap H_2 \cap H_1)
\end{equation}
Here, one could apply Bayes' theorem or the definition of conditional probability:
\begin{equation}
=\frac{\mathbb{P}((H_4\cap H_3\cap H_2\cap H_1) \cap (H_3 \cap H_2 \cap H_1 ))}{\mathbb{P}(H_3 \cap H_2 \cap H_1)} = \frac{\mathbb{P}(H_4 \cap H_3 \cap H_2 \cap H_1)}{\mathbb{P}(H_3 \cap H_2 \cap H_1)}
\end{equation}
by independence, this equals
\begin{equation}
=\frac{\mathbb{P}(H_4)\mathbb{P}(H_3)\mathbb{P}(H_2)\mathbb{P}(H_1)}{\mathbb{P}(H_3)\mathbb{P}(H_2)\mathbb{P}(H_1)} = \mathbb{P}(H_4) = 0.5.
\end{equation}
This second computation is unnecessarily complicated, as the computation under 'Person 1' already gave the correct probability of the next toss. However, I think it could help convince Person 2 that one does not need to take the whole sequence of 4 heads into account.
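If the algebra alone does not convince them, a quick simulation might; the following sketch (my own addition, not part of the argument above) estimates both probabilities from $10^6$ simulated 4-toss sequences:

SeedRandom[1];
flips = RandomInteger[1, {10^6, 4}];             (* each row is one 4-toss sequence; 1 = heads *)
N[Count[flips, {1, 1, 1, 1}]/10^6]               (* ≈ 0.0625, Person 2's joint probability *)
hhh = Select[flips, Take[#, 3] == {1, 1, 1} &];  (* sequences whose first 3 tosses are heads *)
N[Mean[hhh[[All, 4]]]]                           (* ≈ 0.5, Person 1's conditional probability *)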
Best Answer
Assuming you meant a binomial likelihood,
$$
\begin{eqnarray*}
\text{Posterior}(\theta) & \propto & \text{Likelihood}(\theta) \times \text{Prior}(\theta) \\ \\
& = & \text{Binomial}(20 \mid 30, \theta) \times \bigg[ \lambda \times \text{Beta}(\theta \mid 20,10) + (1-\lambda) \times \text{Beta}(\theta \mid 20, 20) \bigg] \\ \\
& = & \lambda \times \text{Binomial}(20 \mid 30, \theta) \times \text{Beta}(\theta \mid 20,10) \\[8pt]
&&+ (1 - \lambda) \times \text{Binomial}(20 \mid 30, \theta) \times \text{Beta}(\theta \mid 20,20) \\ \\
& = & \lambda { 30 \choose 20} \frac{1}{\text{B}(20, 10)} \theta^{40 - 1} (1-\theta)^{20-1} \\[8pt]
&&+ (1-\lambda) {30 \choose 20} \frac{1}{\text{B}(20,20)} \theta^{40-1} (1-\theta)^{30-1} \\ \\
& = & \lambda { 30 \choose 20} \frac{\text{B}(40,20) \text{B}(40,30)}{\text{B}(20, 10) \text{B}(40,20) \text{B}(40,30)} \theta^{40 - 1} (1-\theta)^{20-1} \\[8pt]
&&+ (1-\lambda) {30 \choose 20} \frac{\text{B}(40,20) \text{B}(40,30)}{\text{B}(20,20) \text{B}(40,20) \text{B}(40,30)} \theta^{40-1} (1-\theta)^{30-1} \\ \\
& = & \lambda { 30 \choose 20} \frac{ \text{B}(40,20)}{\text{B}(20, 10)} \text{Beta}(\theta \mid 40,20) \\[8pt]
&&+ (1- \lambda) { 30 \choose 20} \frac{\text{B}(40,30)}{\text{B}(20, 20)} \text{Beta}(\theta \mid 40,30) \\ \\
& \propto & \lambda \frac{ \text{B}(40,20)}{\text{B}(20, 10)} \text{Beta}(\theta \mid 40,20) \\[8pt]
&&+ (1- \lambda) \frac{\text{B}(40,30)}{\text{B}(20, 20)} \text{Beta}(\theta \mid 40,30).
\end{eqnarray*}
$$
Thus, the new weights $\omega_1, \omega_2$ are
$$
\begin{eqnarray*}
\omega_1 & = & \left( \lambda \frac{ \text{B}(40,20)}{\text{B}(20, 10)} \right) \left( \lambda \frac{ \text{B}(40,20)}{\text{B}(20, 10)} + (1- \lambda) \frac{\text{B}(40,30)}{\text{B}(20, 20)} \right)^{-1} \\
\omega_2 & = & 1 - \omega_1,
\end{eqnarray*}
$$
and
$$
\text{Posterior}(\theta) = \omega_1 \times \text{Beta}(\theta \mid 40,20) + \omega_2 \times \text{Beta}(\theta \mid 40,30).
$$
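To put numbers on these weights, here is a short Mathematica sketch; the mixing weight λ = 1/2 is an arbitrary illustrative choice, and Beta[a, b] is the built-in Euler beta function:

λ = 1/2;
a = λ Beta[40, 20]/Beta[20, 10];        (* unnormalized weight of the first component *)
b = (1 - λ) Beta[40, 30]/Beta[20, 20];  (* unnormalized weight of the second component *)
{ω1, ω2} = N[{a/(a + b), b/(a + b)}]

With the weights in hand, posterior summaries follow directly; for instance, the posterior mean is ω1 × 40/60 + ω2 × 40/70, the weighted average of the two component means.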