Assuming uniform integrability of $(X^m)$, we have $\mathrm E[X^m]\to \mathrm E[X]$ as $m\to\infty$. Therefore, it is enough to show that
$$
\left| (\overline{X^m})_m - \mathrm E[X^m]\right|\to 0, m\to \infty,\tag{1}
$$
almost surely, where $(\overline{X^m})_m = \frac1m \sum_{i=1}^m X_i^m$.
One possibility is to go through concentration inequalities. For example, if the variables are bounded, as in your question, then by Hoeffding's inequality, for any $\varepsilon>0$,
$$
\mathrm P\left(\left| (\overline{X^m})_m - \mathrm E[X^m]\right|>\varepsilon\right)\le 2e^{-C \varepsilon^2 m}
$$
with some $C>0$ depending on the bound. The right-hand side is summable in $m$, so the Borel-Cantelli lemma gives $(1)$.
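For illustration, here is a small simulation sketch of mine (with a hypothetical bounded array $X_i^m = U_i + 1/m$, $U_i$ iid uniform on $[0,1]$, so that $\mathrm E[X^m] = 1/2 + 1/m \to 1/2$); it shows the row means concentrating as $m$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bounded triangular array: row m holds m iid copies of
# X^m = U + 1/m, U ~ Uniform[0,1], so E[X^m] = 1/2 + 1/m -> 1/2 = E[X].
for m in [10, 100, 1000, 10000]:
    row = rng.uniform(size=m) + 1.0 / m
    dev = abs(row.mean() - (0.5 + 1.0 / m))
    print(f"m={m:6d}  |row mean - E[X^m]| = {dev:.5f}")
```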
Another possibility is, as I commented, to deduce the uniform convergence
$$
\sup_m \left| (\overline{X^m})_n - \mathrm E[X^m]\right|\to 0, n\to \infty,\tag{2}
$$
from the uniform law of large numbers. However, it seems unlikely that the almost sure convergence can be shown this way; I will only outline the convergence in probability.
Let $F^m$ be the cdf of $X^m$ and let $Q^m(t) = \sup\{x\in \mathbb R: F^m(x)<t\}$, $t\in(0,1)$, be its quasi-inverse (quantile function). Then, as is well known, $X^m \overset{d}{=} Q^m(U)$, where $U$ is a uniform $[0,1]$ variable. Therefore,
$$
(\overline{X^m})_n \overset{d}{=} \frac1n \sum_{k=1}^n Q^m(U_k),
$$
where $U_1,U_2,\dots$ are iid uniform $[0,1]$ variables. Also, it follows from the weak convergence $X^m\to X^0$ (writing $X^0 = X$ for the limit) that $Q^m\to Q^0$ pointwise at the continuity points of $Q^0$, hence almost everywhere on $(0,1)$.
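For concreteness, the quantile coupling $X^m \overset{d}{=} Q^m(U)$ and the pointwise convergence $Q^m \to Q^0$ can be seen in a minimal sketch, assuming the hypothetical family $X^m \sim N(1/m, 1)$, for which $Q^m(t) = 1/m + \Phi^{-1}(t)$ (`norm.ppf` below plays the role of $Q^m$):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
U = rng.uniform(size=5)            # one uniform sample, shared by all rows

# Hypothetical family: X^m ~ N(1/m, 1), so Q^m(t) = 1/m + Phi^{-1}(t).
# The same U_k's represent every row at once: X_k^m = Q^m(U_k) in distribution.
for m in [1, 10, 100, 10**6]:
    X_m = 1.0 / m + norm.ppf(U)    # Q^m(U), equal in distribution to X^m
    print(f"m={m:7d}  Q^m(U) = {np.round(X_m, 3)}")
```

Note that the same $U_k$'s serve all rows simultaneously, which is exactly what makes a uniform-in-$m$ statement like $(2)$ accessible.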
Now let $\Theta = \{m^{-1}, m\ge 1\}\cup \{0\}$ and set $f(t,m^{-1}) = Q^m(t)$, $m\ge 1$, $f(t,0) = Q^0(t)$. Then, as explained above, $f(t,\theta)$ is continuous in $\theta$ for almost all $t$ (with respect to the distribution of $U$). Therefore, assuming the existence of an integrable majorant for $f(U,m^{-1})=Q^m(U)$ (which is easily seen to be equivalent to uniform integrability of $X^m$), we get that
$$
\sup_{\theta\in \Theta}\left| \frac1n \sum_{k=1}^n f(U_k,\theta) - \mathrm{E}[f(U,\theta)]\right| \to 0, n\to \infty,
$$
almost surely, whence we get the convergence $(2)$ in probability (remember that we replaced $(\overline{X^m})_n$ by its distributional copy).
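This uniform law of large numbers can also be checked numerically; the sketch below uses a different hypothetical family, chosen so the parameter actually matters: $f(t,\theta) = (1+\theta)\Phi^{-1}(t)$, hence $\mathrm E[f(U,\theta)] = 0$ for every $\theta$, and $\Theta$ is truncated to finitely many points:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical family: f(t, theta) = (1 + theta) * Phi^{-1}(t), so that
# E[f(U, theta)] = 0 for every theta in Theta.
thetas = np.concatenate(([0.0], 1.0 / np.arange(1, 51)))  # Theta truncated at m = 50
for n in [100, 1000, 10000]:
    U = rng.uniform(size=n)
    Zbar = norm.ppf(U).mean()                 # (1/n) sum_k Phi^{-1}(U_k)
    sup_dev = max(abs((1.0 + th) * Zbar) for th in thetas)
    print(f"n={n:6d}  sup over Theta = {sup_dev:.4f}")
```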
The convergence in probability might sound bad, but there are at least two advantages:

1. Only uniform integrability is required.

2. The approach works for any $(n_m,\ m\ge 1)$ such that $n_m\to\infty$, $m\to\infty$, i.e. we have
$$
\left| (\overline{X^m})_{n_m} - \mathrm E[X^m]\right|\to 0, m\to \infty,
$$
in probability (see the sketch after this list). The first approach fails (to establish the almost sure convergence) for "small" $n_m$.
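To see the second advantage in action, here is a sketch with a deliberately slow, hypothetical $n_m = \lceil \log m\rceil$ (same hypothetical array $X^m = U + 1/m$ as in the first sketch); the deviations still shrink, just slowly, consistent with convergence in probability only:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Small" n_m: here n_m = ceil(log m) grows far slower than m
# (same hypothetical array X^m = U + 1/m as before).
for m in [10**2, 10**4, 10**6, 10**8]:
    n_m = int(np.ceil(np.log(m)))
    row = rng.uniform(size=n_m) + 1.0 / m
    dev = abs(row.mean() - (0.5 + 1.0 / m))
    print(f"m={m:>9d}  n_m={n_m:2d}  deviation = {dev:.4f}")
```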
Best Answer
The claim in question is a corollary of a standard SLLN for martingale difference sequences (MDS).
SLLN for MDS
The statement of the SLLN for MDS is as follows. Let $N_t$ be a martingale difference sequence such that $\sum\limits_{t=1}^{\infty} \frac{E[N_t^2]}{t^2} < \infty$. Then
$$ \frac{1}{n} \sum_{t=1}^n N_t \rightarrow 0 \;\;a.s. $$
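As a quick numerical illustration of the statement (a sketch of mine, not part of the argument): take the hypothetical MDS $N_t = \operatorname{sign}(X_{t-1})\,\varepsilon_t$ with $\varepsilon_t$ iid standard normal. It is not an iid sequence, yet $E[N_t \mid \mathcal F_{t-1}] = 0$ and $E[N_t^2] = 1$, so the summability condition holds and the averages vanish:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical MDS that is not iid: N_t = sign(X_{t-1}) * eps_t with
# eps_t iid N(0,1).  Then E[N_t | F_{t-1}] = 0 and E[N_t^2] = 1,
# so sum_t E[N_t^2] / t^2 < infinity and the SLLN for MDS applies.
T = 10**5
eps = rng.standard_normal(T)
N = np.empty(T)
X = 0.0
for t in range(T):
    N[t] = (1.0 if X >= 0 else -1.0) * eps[t]  # previsible sign times noise
    X += N[t]

means = np.cumsum(N) / np.arange(1, T + 1)
for n in [10**2, 10**3, 10**4, 10**5]:
    print(f"n={n:7d}  (1/n) sum N_t = {means[n-1]:+.5f}")
```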
(In this case, the martingale difference sequence $N_t$ is obtained by differencing the martingale $X_t$: $N_t = X_t - X_{t-1}$, with the convention $X_0 = 0$. By orthogonality of martingale increments, $E[N_t^2] = E[X_t^2] - E[X_{t-1}^2]$, so summation by parts gives \begin{align*} \sum_{t=1}^n \frac{E[N_t^2]}{t^2} &= \sum_{t=1}^n \frac{E[X_t^2] - E[X_{t-1}^2]}{t^2} \\ &= \frac{E[X_n^2]}{n^2} + \sum_{t = 2}^{n} E[X_{t-1}^2] \left( \frac{1}{(t-1)^2} - \frac{1}{t^2} \right). \end{align*}
The assumption that $E[X_{t}^2] = O(t)$ implies that $$ E[X_{t-1}^2] \left( \frac{1}{(t-1)^2} - \frac{1}{t^2} \right) = O\left(\frac{1}{t^2}\right), $$ since $\frac{1}{(t-1)^2} - \frac{1}{t^2} = O(t^{-3})$. Therefore $\sum\limits_{t=1}^{\infty} \frac{E[N_t^2]}{t^2} < \infty$. )
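The summation-by-parts identity above is easy to check numerically (a sketch with the hypothetical second moments $E[X_t^2] = t$, so that $E[N_t^2] = 1$ for every $t$):

```python
import numpy as np

# Numeric check of the summation-by-parts identity, with hypothetical
# second moments E[X_t^2] = t (so E[N_t^2] = 1 for every t).
n = 1000
t = np.arange(1, n + 1)
a = t.astype(float)                                   # a_t = E[X_t^2], a_0 = 0
lhs = np.sum((a - np.concatenate(([0.0], a[:-1]))) / t**2)
rhs = a[-1] / n**2 + np.sum(a[:-1] * (1.0 / t[:-1]**2 - 1.0 / t[1:]**2))
print(lhs, rhs)                                       # both equal sum 1/t^2
```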
In turn, the SLLN for MDS can be shown via two arguments. Both are standard devices for results of this type, one via the martingale convergence theorem and another via Kolmogorov's martingale maximal inequality.
Via Martingale Convergence Theorem
(The previous answer is a variation of this argument.)
If $\sum\limits_{t=1}^{\infty} \frac{E[N_t^2]}{t^2} < \infty$, the martingale $Y_n = \sum\limits_{t = 1}^n \frac{N_t}{t}$, $n \geq 1$, is bounded in $L^2$, therefore converges almost surely (and in $L^2$). Therefore, by Kronecker's lemma, $$ \frac{1}{n}\sum_{t = 1}^n N_t \stackrel{a.s.}{\rightarrow} 0 $$ as $n \rightarrow \infty$.
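Kronecker's lemma itself is deterministic and can be checked numerically; here is a sketch with hypothetical iid standard normal $N_t$ (so that $Y_n$ is an $L^2$-bounded martingale):

```python
import numpy as np

rng = np.random.default_rng(5)

# Kronecker's lemma, numerically: Y_n = sum_{t<=n} N_t / t converges,
# hence (1/n) sum_{t<=n} N_t -> 0.  Hypothetical N_t: iid N(0,1).
T = 10**6
N = rng.standard_normal(T)
t = np.arange(1, T + 1)
Y = np.cumsum(N / t)                   # the L^2-bounded martingale Y_n
avg = np.cumsum(N) / t                 # the averages (1/n) sum N_t
for n in [10**2, 10**4, 10**6]:
    print(f"n={n:8d}  Y_n = {Y[n-1]:+.4f}  avg = {avg[n-1]:+.5f}")
```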
Via Maximal Inequality
Consider again the $L^2$-martingale $Y_n = \sum\limits_{t = 1}^n \frac{N_t}{t}$, $n \geq 1$. Let $\sigma^2_t = \frac{E[ N_t^2 ]}{t^2}$.
By Kolmogorov's maximal inequality, for all $n > 0$ and for all $\epsilon > 0$, $$ P\Big( \sup_{m \geq n} | Y_m - Y_n | \geq \epsilon \Big) \leq \frac{K}{\epsilon^2} \sum_{t > n} \sigma^2_t $$ for some constant $K$ independent of $n$. The right-hand side tends to $0$ as $n \to \infty$, so $$ P\Big( \inf_n \sup_{m \geq n} | Y_m - Y_n | \geq \epsilon \Big) = 0 $$ for all $\epsilon > 0$. In other words, the sequence $Y_n$, $n \geq 1$, is Cauchy, and therefore converges, with probability $1$. Again by Kronecker's lemma, $$ \frac{1}{n}\sum_{t = 1}^n N_t $$ converges to zero as $n \rightarrow \infty$ with probability $1$.
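Finally, the maximal inequality step can be checked by Monte Carlo (a sketch with hypothetical iid standard normal $N_t$, so $\sigma_t^2 = 1/t^2$; the infinite tail sum is truncated at a horizon $M$):

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo check of the maximal inequality for Y_m - Y_n, using
# hypothetical N_t iid N(0,1), so sigma_t^2 = 1/t^2.
n, M, eps, reps = 100, 10000, 0.3, 2000
t = np.arange(n + 1, M + 1)
tail = np.sum(1.0 / t.astype(float)**2)     # sum_{t > n} sigma_t^2, truncated at M
hits = 0
for _ in range(reps):
    incr = rng.standard_normal(M - n) / t   # increments N_t / t, t = n+1, ..., M
    hits += np.max(np.abs(np.cumsum(incr))) >= eps
print(f"empirical P(sup |Y_m - Y_n| >= eps) = {hits / reps:.4f}, bound = {tail / eps**2:.4f}")
```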