There is likely a proof somewhere on this site but I could not find it. Here I give a quick proof of my comment (since I originally mis-stated the result by forgetting the "lower bounded" restriction):
Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of random variables, not necessarily identically distributed and not necessarily independent, that satisfy:
i) $E[X_i]=m_i$, where $m_i \in \mathbb{R}$ for all $i\in\{1, 2, 3, ...\}$.
ii) There is a constant $\sigma^2_{bound}$ such that $Var(X_i) \leq \sigma^2_{bound}$ for all $i \in \{1, 2, 3, ...\}$.
iii) The variables are pairwise uncorrelated, so $E[(X_i-m_i)(X_j-m_j)]=0$ for all $i \neq j$.
iv) There is a value $b \in \mathbb{R}$ such that, with probability 1, $X_i-m_i\geq b$ for all $i \in \{1, 2, 3, ...\}$.
Define $L_n = \frac{1}{n}\sum_{i=1}^n (X_i-m_i)$. Then $L_n\rightarrow 0$ with probability 1.
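Before the proof, a quick numerical illustration. The specific sequence here is my own choice, not part of the claim: with $U$ uniform on $[0,2\pi)$, the variables $X_i=\cos(iU)$ have mean $0$, variance $1/2$, are pairwise uncorrelated but clearly dependent, and are bounded below by $-1$, so the hypotheses hold; the simulation suggests $L_n$ shrinking toward $0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sequence (my own choice): X_i = cos(i*U), U ~ Uniform[0, 2*pi).
# These X_i are mean-zero, pairwise uncorrelated, dependent, and >= -1.
def sample_L_n(n, num_paths=1000):
    U = rng.uniform(0.0, 2.0 * np.pi, size=num_paths)  # one U per sample path
    i = np.arange(1, n + 1)
    X = np.cos(np.outer(U, i))                          # row p, column i: cos(i * U_p)
    return X.mean(axis=1)                               # L_n for each path (here m_i = 0)

for n in (10, 100, 1000, 10000):
    print(n, np.abs(sample_L_n(n)).max())               # worst |L_n| over all paths decays
```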
Proof: Write $\sigma_i^2 = Var(X_i)$. Since the variables are pairwise uncorrelated with bounded variance, we easily find for all $n$:
$$ E[L_n^2] = \frac{1}{n^2}\sum_{i=1}^n \sigma_i^2 \leq \frac{\sigma_{bound}^2}{n} $$
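Spelled out, expanding the square and using (iii) to kill the cross terms:
$$ E[L_n^2] = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n E[(X_i-m_i)(X_j-m_j)] = \frac{1}{n^2}\sum_{i=1}^n E[(X_i-m_i)^2], $$
and the final bound follows from (ii).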
Fix $\epsilon>0$. Applying Markov's inequality to $L_n^2$, it follows that:
$$ P[|L_n|>\epsilon] = P[L_n^2 > \epsilon^2] \leq \frac{E[L_n^2]}{\epsilon^2} \leq \frac{\sigma_{bound}^2}{n\epsilon^2} $$
Hence:
$$ \sum_{n=1}^{\infty} P[|L_{n^2}|>\epsilon] \leq \sum_{n=1}^{\infty}\frac{\sigma_{bound}^2}{n^2\epsilon^2} < \infty $$
and so, for each fixed $\epsilon$, the Borel-Cantelli Lemma gives $P[|L_{n^2}|>\epsilon \mbox{ infinitely often}]=0$; taking a countable sequence $\epsilon \downarrow 0$ yields $L_{n^2}\rightarrow 0$ with probability 1. That is, the $L_n$ values converge over the sparse subsequence $n\in\{1, 4, 9, 16, ...\}$.
Since each $X_i-m_i\geq b$ with probability 1 and $L_{n^2}\rightarrow 0$ with probability 1, a sandwich argument between consecutive squares (sketched below) shows that $L_n\rightarrow 0$ with probability 1. $\Box$
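One way to fill in that last step, using only (iv) and the subsequence convergence: for $n^2 \leq k < (n+1)^2$, write $kL_k = n^2L_{n^2} + \sum_{i=n^2+1}^k (X_i-m_i)$ and $(n+1)^2L_{(n+1)^2} = kL_k + \sum_{i=k+1}^{(n+1)^2}(X_i-m_i)$. Bounding each trailing term below by $b$ and rearranging gives
$$ \frac{n^2}{k}L_{n^2} + \frac{k-n^2}{k}\,b \;\leq\; L_k \;\leq\; \frac{(n+1)^2}{k}L_{(n+1)^2} - \frac{(n+1)^2-k}{k}\,b $$
The coefficients $\frac{k-n^2}{k}$ and $\frac{(n+1)^2-k}{k}$ are at most $\frac{2n+1}{n^2}\rightarrow 0$, while $\frac{n^2}{k}$ and $\frac{(n+1)^2}{k}$ stay bounded, so both sides tend to $0$ on any sample path where $L_{n^2}\rightarrow 0$; hence $L_k\rightarrow 0$ on that path.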
The lower-bound condition is typically handled by writing $X_n = X_n^+ - X_n^-$, where $X_n^+=\max[X_n,0]$ and $X_n^-=-\min[X_n,0]$ are nonnegative. If $X_n$ and $X_i$ are independent, then $X_n^+$ and $X_i^+$ are also independent, so the lower-bound condition can be removed when the variables are independent. However, if $X_n$ and $X_i$ are merely uncorrelated, it does not follow that $X_n^+$ and $X_i^+$ are uncorrelated (see the example below). So it is not clear to me whether the lower-bound condition can be removed when "independence" is replaced by the weaker condition "pairwise uncorrelated."
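For a concrete instance of that last point: let $(X,Y)$ take the values $(2,0)$, $(-2,0)$, $(0,1)$, $(0,-1)$, each with probability $1/4$. Then $E[X]=E[Y]=E[XY]=0$, so $X$ and $Y$ are uncorrelated; but $E[X^+Y^+]=0$ while $E[X^+]E[Y^+]=\frac{1}{2}\cdot\frac{1}{4}$, so $Cov(X^+,Y^+)=-\frac{1}{8}\neq 0$.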
Best Answer
No, it cannot. Suppose $X_1, \dots, X_n$ are pairwise independent random variables such that $\sum_{i = 1}^n X_i = c$ is a constant. We can assume without loss of generality that $c = 0$ (otherwise simply replace each $X_i$ with $X_i - \frac{c}{n}$). Now suppose $t \in \mathbb{R}$ satisfies $P(\max(|X_1|, \dots , |X_n|) > t) \leq \frac{1}{n}$ (such a $t$ exists because for any random variable $Y$ we have $\lim_{t \to \infty}P(|Y| > t) = 0$). Also note that, since $\sum_{i = 1}^n X_i = 0$, the event $|X_i| > nt$ implies that $|X_j| > t$ for some $j \neq i$: indeed, $X_i = -\sum_{j \neq i} X_j$, so if $|X_j| \leq t$ for all $j \neq i$ then $|X_i| \leq \sum_{j \neq i} |X_j| \leq (n-1)t < nt$. Thus
$$P(|X_i| > nt) = P\Big((|X_i| > nt)\cap(\exists j \neq i: |X_j| > t)\Big) \leq \sum_{j \neq i}P\big((|X_i| > nt)\cap(|X_j|>t)\big) = \sum_{j \neq i}P(|X_i| > nt)\,P(|X_j|>t) \leq \frac{n-1}{n}P(|X_i| > nt)$$
And as $\frac{n-1}{n} < 1$, this is only possible when $P(|X_i| > nt) = 0$, i.e. $|X_i| \leq nt$ almost surely. But we know that if $|Y| \leq a$ almost surely then $Var[Y] \leq a^2$, so all the $X_i$ have finite variances. Since pairwise independent variables are pairwise uncorrelated, we conclude that $0 = Var[0] = Var\big[\sum_{i = 1}^n X_i\big] = \sum_{i = 1}^n Var[X_i]$, which forces $Var[X_i] = 0$ for every $i$, i.e. $X_1, \dots , X_n$ are all constants.
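As a quick sanity check with $n = 2$: if $X_1 + X_2 = 0$ and $X_1, X_2$ are independent, then $X_1$ is independent of $X_2 = -X_1$, i.e. independent of itself, which is only possible when $X_1$ (and hence $X_2$) is constant, in agreement with the conclusion above.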