There is likely a proof somewhere on this site but I could not find it. Here I give a quick proof of my comment (since I originally mis-stated the result by forgetting the "lower bounded" restriction):
Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of random variables, not necessarily identically distributed and not necessarily independent, that satisfy:
i) $E[X_i]=m_i$, where $m_i \in \mathbb{R}$ for all $i\in\{1, 2, 3, ...\}$.
ii) There is a constant $\sigma^2_{bound}$ such that $Var(X_i) \leq \sigma^2_{bound}$ for all $i \in \{1, 2, 3, ...\}$.
iii) The variables are pairwise uncorrelated, so $E[(X_i-m_i)(X_j-m_j)]=0$ for all $i \neq j$.
iv) There is a value $b \in \mathbb{R}$ such that, with prob 1, $X_i-m_i\geq b$ for all $i \in \{1, 2, 3, ...\}$.
Define $L_n = \frac{1}{n}\sum_{i=1}^n (X_i-m_i)$. Then $L_n\rightarrow 0$ with prob 1.
Proof: Since the variables are pairwise uncorrelated with bounded variance, writing $\sigma_i^2 = Var(X_i)$ we easily find for all $n$:
$$ E[L_n^2] = \frac{1}{n^2}\sum_{i=1}^n \sigma_i^2 \leq \frac{\sigma_{bound}^2}{n} $$
Fix $\epsilon>0$. By Markov's inequality applied to $L_n^2$:
$$ P[|L_n|>\epsilon] = P[L_n^2 > \epsilon^2] \leq \frac{E[L_n^2]}{\epsilon^2} \leq \frac{\sigma_{bound}^2}{n\epsilon^2} $$
Hence:
$$ \sum_{n=1}^{\infty} P[|L_{n^2}|>\epsilon] \leq \sum_{n=1}^{\infty}\frac{\sigma_{bound}^2}{n^2\epsilon^2} < \infty $$
and so $L_{n^2}\rightarrow 0$ with probability 1 by the Borel-Cantelli Lemma. That is, the $L_n$ values converge to $0$ along the sparse subsequence $n\in\{1, 4, 9, 16, ...\}$.
Finally, since $X_i-m_i\geq b$ with probability 1, the full sequence can be sandwiched between consecutive squares: for $n^2 \leq k < (n+1)^2$ we have $kL_k \geq n^2 L_{n^2} + (k-n^2)b$ and $kL_k \leq (n+1)^2 L_{(n+1)^2} - ((n+1)^2-k)b$. Because $n^2/k \rightarrow 1$, $(n+1)^2/k \rightarrow 1$, and the $b$ terms are $O(n/k)\rightarrow 0$, both bounds converge to $0$, so $L_{n^2}\rightarrow 0$ with probability 1 implies $L_n\rightarrow 0$ with probability 1. $\Box$
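Here is a quick numerical sanity check of the conclusion. The distribution is purely illustrative (my own choice, not part of the proof): independent uniform variables with drifting means, which satisfy conditions i)-iv) with $\sigma^2_{bound} = 1/3$ and $b = -1$.

```python
import random

random.seed(0)

# Illustrative choice: X_i ~ Uniform(m_i - 1, m_i + 1) with drifting means
# m_i = i mod 3.  These are independent (hence pairwise uncorrelated),
# have Var(X_i) = 1/3 <= sigma_bound^2, and satisfy X_i - m_i >= b = -1.
n = 100_000
running_sum = 0.0
for i in range(1, n + 1):
    m_i = i % 3
    x_i = random.uniform(m_i - 1.0, m_i + 1.0)
    running_sum += x_i - m_i

L_n = running_sum / n
print(f"L_n for n = {n}: {L_n:.5f}")  # should be close to 0
```

With $n = 10^5$ the Chebyshev bound above already forces $|L_n|$ to be small with high probability, and the simulated value reflects that.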
The lower-bound condition is typically treated by writing $X_n = X_n^+ - X_n^-$, where $X_n^+$ and $X_n^-$ are nonnegative and defined by $X_n^+=\max[X_n,0]$ and $X_n^-=-\min[X_n,0]$. If $X_n$ and $X_i$ are independent, then $X_n^+$ and $X_i^+$ are also independent, so the lower-bound condition can be removed in the case when the variables are independent. However, if $X_n$ and $X_i$ are merely uncorrelated, it does not follow that $X_n^+$ and $X_i^+$ are uncorrelated. So it is not clear to me whether the lower-bound condition can be removed when "independence" is replaced by the weaker condition "pairwise uncorrelated."
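To see concretely why uncorrelatedness is not preserved by taking positive parts, here is one small counterexample of my own (not from the argument above): a pair $(X,Y)$ uniform on four points that is uncorrelated, while $(X^+, Y^+)$ is not.

```python
# Joint distribution: (X, Y) uniform on these four points.
# X and Y each have mean 0 and E[XY] = 0, so they are uncorrelated.
points = [(2, 0), (-2, 0), (0, 1), (0, -1)]

def mean(vals):
    return sum(vals) / len(vals)

xs = [x for x, _ in points]
ys = [y for _, y in points]
cov_xy = mean([x * y for x, y in points]) - mean(xs) * mean(ys)

# Positive parts X+ = max(X, 0), Y+ = max(Y, 0).
xp = [max(x, 0) for x in xs]
yp = [max(y, 0) for y in ys]
cov_pp = mean([a * b for a, b in zip(xp, yp)]) - mean(xp) * mean(yp)

print(cov_xy)  # 0.0    -> X, Y uncorrelated
print(cov_pp)  # -0.125 -> X+, Y+ correlated
```

Whenever $X > 0$ we have $Y^+ = 0$, so $X^+Y^+ \equiv 0$ while $E[X^+]E[Y^+] = 0.125$, which is exactly the negative covariance the script reports.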
Let $A_{12}$ and $A_3$ be any two Borel measurable sets, and let $\mu_1$, $\mu_2$, and $\mu_3$ be the respective probability measures of $X_1$, $X_2$, and $X_3$. Since the latter are independent, their joint probability measure is the product measure, so we can compute by Lebesgue integration
\begin{align*}
\mathbb{P}(X_1 + X_2 \in A_{12},X_3\in A_3) &= \int \int\int \mathbb{1}(x_1 + x_2 \in A_{12},x_3\in A_3)d\mu_1d\mu_2d\mu_3\\
&=\int \int\int \mathbb{1}(x_1 + x_2 \in A_{12})\mathbb{1}(x_3\in A_3)d\mu_1 d\mu_2 d\mu_3\\
&=\int\int\mathbb{1}(x_1 + x_2 \in A_{12}) d\mu_1 d\mu_2 \int \mathbb{1}(x_3\in A_3)d\mu_3\\
&=\mathbb{P}(X_1+X_2\in A_{12}) \mathbb{P}(X_3\in A_3)
\end{align*}
So yes $X_1+X_2$ and $X_3$ are independent.
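A Monte Carlo illustration of this identity, assuming (purely for illustration) iid Uniform$(-1,1)$ variables and the events $A_{12} = (-\infty, 0]$, $A_3 = (-\infty, 0]$:

```python
import random

random.seed(1)

trials = 200_000
joint = 0      # count of {X1 + X2 <= 0 and X3 <= 0}
sum_event = 0  # count of {X1 + X2 <= 0}
x3_event = 0   # count of {X3 <= 0}
for _ in range(trials):
    x1, x2, x3 = (random.uniform(-1, 1) for _ in range(3))
    s_in = (x1 + x2) <= 0
    t_in = x3 <= 0
    joint += s_in and t_in
    sum_event += s_in
    x3_event += t_in

p_joint = joint / trials
p_product = (sum_event / trials) * (x3_event / trials)
print(p_joint, p_product)  # the two agree up to Monte Carlo error
```

Both estimates should be near $0.25$, matching $\mathbb{P}(X_1+X_2\leq 0)\,\mathbb{P}(X_3\leq 0) = \tfrac12\cdot\tfrac12$.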
Let's try without Lebesgue integration: we can show that $Y=(X_1, X_2)$ is independent of $X_3$, since for any Borel measurable sets $A_1$, $A_2$, $A_3$ (product sets $A_1\times A_2$ generate the Borel $\sigma$-algebra of $\mathbb{R}^2$, so this suffices)
\begin{align*}
\mathbb{P}(Y\in A_1\times A_2,X_3\in A_3)&=\mathbb{P}(X_1\in A_1, X_2\in A_2, X_3 \in A_3)\\
&=\mathbb{P}(X_1\in A_1, X_2\in A_2) \mathbb{P}(X_3\in A_3)\\
&=\mathbb{P}(Y\in A_1\times A_2) \mathbb{P}(X_3\in A_3)
\end{align*}
Now let $f$ be any deterministic (Borel measurable) function over the domain of $Y$, and write $f^{-1}(Z)=\lbrace y \mid f(y)\in Z\rbrace$ for the preimage; then for Borel measurable sets $A_0$, $A_3$
\begin{align*}
\mathbb{P}(f(Y)\in A_0,X_3\in A_3)&=\mathbb{P}(Y\in f^{-1}(A_0),X_3\in A_3)\\
&=\mathbb{P}(Y\in f^{-1}(A_0))\mathbb{P}(X_3\in A_3)\\
&=\mathbb{P}(f(Y)\in A_0)\mathbb{P}(X_3\in A_3)
\end{align*}
So $f(Y)$ is independent of $X_3$.
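The same kind of empirical check works for a nonlinear $f$. Here is a sketch with $f(Y)=\max(X_1,X_2)$, an arbitrary illustrative choice, using iid Uniform$(0,1)$ variables and half-line events:

```python
import random

random.seed(2)

trials = 200_000
joint = f_event = x3_event = 0
for _ in range(trials):
    x1, x2, x3 = (random.random() for _ in range(3))
    f_in = max(x1, x2) <= 0.5  # event {f(Y) in A_0} with A_0 = (-inf, 0.5]
    t_in = x3 <= 0.5           # event {X3 in A_3} with A_3 = (-inf, 0.5]
    joint += f_in and t_in
    f_event += f_in
    x3_event += t_in

p_joint = joint / trials
p_product = (f_event / trials) * (x3_event / trials)
print(p_joint, p_product)  # agree up to Monte Carlo error
```

Here $\mathbb{P}(\max(X_1,X_2)\leq 0.5) = 0.25$ and $\mathbb{P}(X_3\leq 0.5) = 0.5$, so both estimates should be near $0.125$.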
Best Answer
Try $$ g_n\colon x\mapsto x\mathbf 1_{\left(-\infty,n\right]}(x). $$
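A minimal sketch of this truncation, reading it as $g_n(x) = x$ for $x \leq n$ and $g_n(x) = 0$ otherwise; note that for each fixed $x$, $g_n(x) = x$ once $n \geq x$, so $g_n \rightarrow \mathrm{id}$ pointwise:

```python
def g(x, n):
    """Truncation g_n(x) = x * 1_{(-inf, n]}(x)."""
    return x if x <= n else 0.0

# Values at or below the cutoff pass through; the rest are zeroed out.
print(g(5.0, 10))   # -> 5.0
print(g(15.0, 10))  # -> 0.0
# Pointwise convergence: g(x, n) == x as soon as n >= x.
print(g(15.0, 20))  # -> 15.0
```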