This is not true in general.
Notation: for each $n$, we partition $E_0$ into $m_n$ cells $\{B_n^j\}_{j=1}^{m_n}$, where $B_n^j$ is the $j$th cell of the $n$th partition and $m_n \to \infty$ as $n \to \infty$.
Let $E_0=[0,1]$ and let $\mu$ be Lebesgue measure on $\mathbb{R}$. Let $f$ be the indicator function of $[1/2,1]$ (or a smooth approximation thereof). At stage $n$, let $B_n^{m_n} = [1/2,1]$ and let $B_n^1$ through $B_n^{m_n-1}$ be any partition of $[0,1/2)$ into cells of positive measure. Then $\mu(E_0)^{-1} \int_{E_0} f \, d\mu = 1/2$, but, since $f$ vanishes on every cell except the last, $$\frac{1}{m_n} \sum_{j=1}^{m_n} \frac{1}{\mu(B_n^j)}\int_{B_n^j}f\,d\mu=\frac{1}{m_n}\cdot\frac{1}{\mu(B_n^{m_n})}\int_{B_n^{m_n}}f\,d\mu=\frac{1}{m_n}\longrightarrow 0.$$
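If it helps to see this numerically, here is a quick sketch (the equal-width cells on $[0,1/2)$ are just one convenient choice of positive-measure cells, and the midpoint rule stands in for exact integration):

```python
import numpy as np

def cell_average(f, a, b, n_grid=10_000):
    """Average of f over [a, b] w.r.t. Lebesgue measure (midpoint rule)."""
    step = (b - a) / n_grid
    x = np.linspace(a, b, n_grid, endpoint=False) + step / 2
    return f(x).mean()

f = lambda x: (x >= 0.5).astype(float)  # indicator of [1/2, 1]

for m in [2, 10, 100, 1000]:
    # m - 1 equal cells partitioning [0, 1/2), plus the single cell [1/2, 1]
    edges = np.concatenate([np.linspace(0.0, 0.5, m), [1.0]])
    avgs = [cell_average(f, edges[j], edges[j + 1]) for j in range(m)]
    print(m, np.mean(avgs))  # prints 1/m; the true average is 0.5
```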
Addendum: of course, the reason you might expect your statement to hold is our intuition from dividing $E_0$ into cells of equal size. In that case the result holds at every stage (not just in the limit), since $\mu(B_n^j)=\frac{\mu(E_0)}{m_n}$ and a convenient cancellation occurs:
$$\frac{1}{m_n}\sum_{j=1}^{m_n}\frac{1}{\mu(B_n^j)}\int_{B_n^j}f\,d\mu=\frac{1}{m_n}\sum_{j=1}^{m_n}\frac{m_n}{\mu(E_0)}\int_{B_n^j}f\,d\mu=\frac{1}{\mu(E_0)}\int_{E_0}f\,d\mu.$$
However! There is a very similar problem, essentially the dual of your stated problem, where you can use martingale techniques. If you have a filtration of partitions (say, countable at each stage) that get coarser and coarser, so that in the limit the partition is just $\{\emptyset,E_0\}$, then the conditional expectations with respect to this filtration form a reverse martingale, and one can apply Lévy's Downward Theorem (14.4 in Williams' Probability with Martingales) to show that this sequence of conditional expectations converges to the average on $E_0$ (pointwise almost surely; remember, each conditional expectation is understood as a random variable).
This is not precisely the analogue of your question, where you take an unweighted average of the conditional expectations over the cells, but it is close enough that I thought it would be worth mentioning.
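Here is a small numerical sketch of the downward-martingale picture, under some arbitrary illustrative choices ($f(x)=x^2$, the sample point $\omega = 0.7$, and dyadic partitions of $[0,1]$ that coarsen as the level $k$ decreases). The conditional expectation at $\omega$ is the average of $f$ over the cell containing $\omega$, and it moves to the global average $\int_0^1 f\,d\mu = 1/3$ as the partition coarsens; unlike the unweighted average in the counterexample, the conditional expectation accounts for each cell's measure automatically.

```python
import numpy as np

f = lambda x: x**2   # any integrable f works; here int_0^1 f dmu = 1/3
omega = 0.7          # a fixed sample point

def cond_expectation_at(omega, k, n_grid=100_000):
    """Average of f over the level-k dyadic cell containing omega,
    i.e. E[f | level-k partition] evaluated at omega."""
    width = 2.0 ** (-k)
    a = np.floor(omega / width) * width
    return f(np.linspace(a, a + width, n_grid)).mean()

for k in [10, 5, 3, 1, 0]:   # coarser and coarser partitions
    print(k, cond_expectation_at(omega, k))   # reaches 1/3 at k = 0
```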
A proof, which you have essentially started, can be derived from Kolmogorov's inequality.
For $n < m$, Kolmogorov's inequality applied to the martingale $(Y_k-Y_n)_{n \leq k \leq m}$ gives, for all $\epsilon > 0$,
$$
P\Big(\max_{n \leq k \leq m} |Y_k - Y_n| > \epsilon\Big) \leq \frac{1}{\epsilon^2} \sum_{k = n+1}^m \frac{\operatorname{Var}(X_k)}{k^2}.
$$
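As a sanity check, this maximal inequality is easy to test by Monte Carlo. The sketch below assumes $Y_k = \sum_{j \le k} X_j/j$ with independent standard normal $X_k$ (so $\operatorname{Var}(X_k) = 1$), as the variance bound suggests; the values of $n$, $m$, and $\epsilon$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, eps, trials = 10, 200, 0.5, 20_000

k = np.arange(n + 1, m + 1)              # indices of the increments X_k / k
bound = (1.0 / k**2).sum() / eps**2      # RHS: (1/eps^2) * sum Var(X_k)/k^2

X = rng.standard_normal((trials, m - n))           # X_{n+1}, ..., X_m
partial = np.cumsum(X / k, axis=1)                 # Y_k - Y_n for n < k <= m
freq = (np.abs(partial).max(axis=1) > eps).mean()  # empirical LHS
print(freq, "<=", bound)
```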
By hypothesis $\sum_k \operatorname{Var}(X_k)/k^2 < \infty$, so the right-hand side is the tail of a convergent series and tends to $0$ as $n \to \infty$, uniformly in $m$. Hence, for all $\epsilon > 0$,
$$
P\Big(\inf_l \sup_{n,m \geq l} |Y_m - Y_n| > \epsilon\Big) \leq \lim_{l \rightarrow \infty} P\Big(\sup_{n,m \geq l} |Y_m - Y_n| > \epsilon\Big) = 0.
$$
Now take $\epsilon_p \rightarrow 0$. Define
$$
\Omega_p = \{ \inf_l \sup_{n,m \geq l} |Y_m - Y_n| \leq \epsilon_p \},
$$ and $\Omega' = \bigcap_p \Omega_p$.
Then $P(\Omega') = 1$ and, for every $\omega \in \Omega'$, $(Y_n(\omega))$ is a Cauchy sequence and therefore converges.
Kronecker's Lemma then finishes the proof, as before.
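The whole argument can be watched on simulated data. The sketch below again assumes $Y_n = \sum_{k \le n} X_k/k$ with independent standard normal $X_k$, so that $\sum_k \operatorname{Var}(X_k)/k^2 < \infty$ holds: the sequence $Y_n$ settles down to a limit, and Kronecker's Lemma then forces the running means $\frac{1}{n}\sum_{k \le n} X_k$ to $0$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
X = rng.standard_normal(N)       # independent, centered, Var(X_k) = 1
k = np.arange(1, N + 1)

Y = np.cumsum(X / k)             # Y_n = sum_{k <= n} X_k / k: converges a.s.
means = np.cumsum(X) / k         # (1/n) sum_{k <= n} X_k: -> 0 by Kronecker

for n in [10**2, 10**4, 10**6]:
    print(n, Y[n - 1], means[n - 1])
```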
Comment
This does not really "avoid the martingale machinery", though. The argument replaces the martingale convergence theorem with Kolmogorov's inequality, which is itself a maximal inequality for martingales.
It does not seem easy to get away from using the martingale property in one way or another.
The alternative argument proposed in the previous answer uses the fact that a series of independent summands that converges in probability must also converge almost surely. The standard proof of this fact is a stopping-time argument, which also uses the stronger independence property.
Best Answer
The martingale convergence theorem, for $L^2$-bounded martingales.
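(For reference, the statement: if $(M_n)$ is a martingale with $\sup_n \mathbb{E}[M_n^2] < \infty$, then $M_n$ converges almost surely and in $L^2$. Assuming, as above, $Y_n = \sum_{k \le n} X_k/k$ with independent centered $X_k$ and $\sum_k \operatorname{Var}(X_k)/k^2 < \infty$, we get $\sup_n \mathbb{E}[Y_n^2] = \sum_k \operatorname{Var}(X_k)/k^2 < \infty$, so $Y_n$ converges a.s., and Kronecker's Lemma finishes as before.)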