I think you are attacking the problem using the wrong conceptual frame.
The probability of decoding error for a repetition code of length $n$ (odd) is the probability that a majority of the bits are flipped, i.e. a Binomial tail:
$$\delta = \sum_{k=(n+1)/2}^{n} \binom{n}{k} (1-p)^{n-k} p^{k} \tag{1}$$
This gives $\delta$ as a function of $p$ and $n$, and in principle it allows the inverse calculation ($n$ for given $\delta$ and $p$). But, of course, that inversion can only be done numerically.
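Since the question is ultimately about finding $n$, here is a minimal numeric sketch (my own, not part of the argument above) that evaluates $(1)$ with `scipy` and searches for the smallest odd $n$ meeting a target $\delta$; the function names and the sample values of $p$ and $\delta$ are purely illustrative:

```python
# Minimal numeric sketch: evaluate eq. (1) and invert it for n by direct search.
# Assumes a BSC with flip probability p; values below are illustrative only.
from scipy.stats import binom

def delta_exact(n, p):
    """P(majority of the n bits are flipped) for odd n, i.e. eq. (1)."""
    return binom.sf((n + 1) // 2 - 1, n, p)   # P(X >= (n+1)/2)

def smallest_n(delta_target, p):
    """Smallest odd n whose decoding-error probability is <= delta_target."""
    n = 1
    while delta_exact(n, p) > delta_target:
        n += 2
    return n

print(smallest_n(1e-3, 0.11))
```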
A simplification can be made by assuming that $n$ is large (then the above sum turns into the integral of a Gaussian density) and that $p=(1-\epsilon)/2$, with $\epsilon \to 0^+$. But this is just math.
Here's the math.
The Binomial (which counts bit errors) tends to a Gaussian with mean $np$ and variance $n p (1-p)$, and the sum tends to an integral from $n/2$ to $\infty$ (a Gaussian tail, i.e. the Q function); hence, in this asymptotic approximation,
$$\delta = Q\left(\frac{n/2-np}{\sqrt{n p (1-p)}}\right) \tag{2}$$
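As a quick numeric sanity check (again mine, with illustrative values), $(2)$ tracks $(1)$ quite well already for moderate $n$:

```python
# Compare the exact binomial tail (1) with the Gaussian approximation (2).
from scipy.stats import binom, norm

n, p = 101, 0.4
exact  = binom.sf((n + 1) // 2 - 1, n, p)                      # eq. (1)
approx = norm.sf((n / 2 - n * p) / (n * p * (1 - p)) ** 0.5)   # eq. (2); norm.sf is Q
print(exact, approx)
```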
Given $p=(1-\epsilon)/2$, so that $p(1-p)=(1-\epsilon^2)/4$, the argument is given by
$$ \sqrt{n}\, \frac{\epsilon/2}{\sqrt{(1- \epsilon^2)/4}}=\sqrt{\frac{n\,\epsilon^2}{1- \epsilon^2}} \tag{3}$$
If we further assume $0<\epsilon \ll 1$, so that $1-\epsilon^2 \approx 1$, we get
$$\delta = Q( \epsilon \sqrt{n} ) \tag{4}$$
This equation relates $\delta$, $n$, and $\epsilon$. Namely
$$n = \frac{(Q^{-1}(\delta))^2}{\epsilon^2} \tag{5}$$
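A one-line numeric version of $(5)$, assuming `scipy.stats.norm.isf` as $Q^{-1}$ (the sample values are mine):

```python
# Sketch of eq. (5): required n for target error delta when p = (1 - eps)/2.
from scipy.stats import norm

def n_required(delta, eps):
    return (norm.isf(delta) / eps) ** 2    # round up to an odd integer in practice

print(n_required(1e-3, 0.01))   # roughly 9.5e4
```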
If you need asymptotics, you need to specify what tends to what. This is not specified in the question body.
If you want to find $n$ such that, as $\epsilon \to 0^+$ the probability of error keeps a finite constant value $\delta$, then you just use $(5)$ - the only conclusion is that $n \epsilon^2$ must be constant.
If instead you are told that $\delta \to 0$, then you can use the leading-order asymptotic $({\rm inverf}\, x)^2 \approx \log\frac{1}{(1-x)\sqrt{\pi}}$ (valid as $x \to 1^-$) together with $Q^{-1}(a)=\sqrt 2 \, {\rm inverf}(1-2a)$. Then
$$ n \approx \frac{2}{\epsilon^2} \log \frac{1}{2 \delta \sqrt{\pi}} $$
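A quick check (mine) of how close this leading-order asymptotic is to the exact $(5)$ for a small $\delta$:

```python
# Compare the small-delta asymptotic with the exact eq. (5); natural log assumed.
from math import log, pi, sqrt
from scipy.stats import norm

delta, eps = 1e-6, 0.01
exact  = (norm.isf(delta) / eps) ** 2                    # eq. (5)
asympt = 2 * log(1 / (2 * delta * sqrt(pi))) / eps ** 2  # leading-order asymptotic
print(exact, asympt)
```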
I think decomposing the error probability in terms of the number of flipped bits is the way to go, but the summand is incorrect. $0.7p(0|1)^k$ is the probability of sending the bit $1$ and seeing bit flips in $k$ fixed positions; since you are not putting any restrictions on the remaining $2n+1-k$ bits, it still counts events with $k, k+1,\ldots, 2n+1$ bit errors.
Let $E_k$ denote the event of $k$ bit flips. Then, $f(n) = \sum_{k=n+1}^{2n+1} P(E_k)$.
To find $P(E_k)$, you need to count the configurations with exactly $k$ bit errors and $(2n+1-k)$ correctly received bits:
\begin{align}
P(E_k) = 0.7 P(E_k \vert 1 \text{ sent}) + 0.3 P(E_k \vert 0 \text{ sent}) = \binom{2n+1}{k}\left(\frac{1}{8}\right)^k\left(\frac{7}{8}\right)^{2n+1-k}
\end{align}
So,
$$f(n) = \sum_{k=n+1}^{2n+1}\binom{2n+1}{k}\left(\frac{1}{8}\right)^k\left(\frac{7}{8}\right)^{2n+1-k}$$
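If it helps, here is a short sketch (my own) that evaluates $f(n)$, assuming as above a symmetric channel with flip probability $1/8$ and priors $0.7$ / $0.3$:

```python
# Evaluate f(n): probability that a majority of the 2n+1 bits are flipped.
from math import comb

def f(n):
    return sum(comb(2 * n + 1, k) * (1 / 8) ** k * (7 / 8) ** (2 * n + 1 - k)
               for k in range(n + 1, 2 * n + 2))

print([f(n) for n in range(4)])   # shrinks as n grows
```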
Let $X$ denote the total number of flipped bits among the $n$ transmitted bits.
$X$ is a random variable following the binomial distribution $Bin(n, p)$, with average $~E[X] = np~$ and variance (square of the standard deviation) $~V[X] = npq~$, where $q = 1-p$.
We get an error when the fraction of the bits (among $n$) that are flipped is over half. Thus we are interested in the scaled random variable $$\begin{align} Y &\equiv \frac{X}n \quad \text{with} & E[Y]&=\frac{E[X]}n = p\\ && V[Y]&=\frac{ V[X]}{n^2} = \frac{pq}{n} \end{align}$$
Therefore, we need the strict inequality $p < \frac{1}2$ to make the whole process work: the decision threshold $\frac{1}2$ then lies at a fixed distance $\Delta = \frac{1}2 - p > 0$ from the mean of $Y$.
Slightly more formally, the probability of $Y$ (the fraction of interest) being at distance $\Delta$ or more away from its mean $p$ satisfies
$$ Pr\bigg\{ ~ |Y - p| \geq \Delta ~\bigg\} \leq \left(\frac{\sigma}{\Delta} \right)^2 \qquad \to 0 \quad \text{as} \quad n \to \infty$$
This is a common and elementary form of Chebyshev's inequality, with $\sigma^2 = V[Y] = pq/n$.
This is exactly the Law of Large Numbers applied in this specific context: writing $X \equiv \sum_{k = 1}^n B_k$, where the $B_k$ are i.i.d. Bernoulli random variables with two outcomes (success with probability $p$, failure with $q$) and $E[B_k] = p$ for all $k$, the vanishing variance of the "sample mean" $Y$ forces $Y$ to concentrate around its mean $E[Y] = p$, the true success probability.
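To illustrate how loose but still sufficient the Chebyshev bound is, here is a small numeric sketch (mine, with an illustrative $p = 0.3$) comparing the bound $(\sigma/\Delta)^2$ with the exact majority-error probability:

```python
# Chebyshev bound (sigma/Delta)^2 vs. exact P(majority of bits flipped), p = 0.3.
from scipy.stats import binom

p, q = 0.3, 0.7
Delta = 0.5 - p                              # distance from mean p to threshold 1/2
for n in (11, 101, 1001):                    # odd n
    exact = binom.sf((n + 1) // 2 - 1, n, p) # P(X >= (n+1)/2)
    bound = (p * q / n) / Delta ** 2         # (sigma/Delta)^2 with sigma^2 = pq/n
    print(n, exact, bound)
```

Both columns go to $0$ as $n \to \infty$; the exact probability decays much faster, but the elementary bound is already enough to prove the vanishing error probability.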