There is likely a proof somewhere on this site but I could not find it. Here I give a quick proof of my comment (since I originally misstated the result by forgetting the "lower bounded" restriction):
Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of random variables, not necessarily identically distributed and not necessarily independent, that satisfy:
i) $E[X_i]=m_i$, where $m_i \in \mathbb{R}$ for all $i\in\{1, 2, 3, ...\}$.
ii) There is a constant $\sigma^2_{bound}$ such that $Var(X_i) \leq \sigma^2_{bound}$ for all $i \in \{1, 2, 3, ...\}$.
iii) The variables are pairwise uncorrelated, so $E[(X_i-m_i)(X_j-m_j)]=0$ for all $i \neq j$.
iv) There is a value $b \in \mathbb{R}$ such that, with probability 1, $X_i-m_i\geq b$ for all $i \in \{1, 2, 3, ...\}$.
Define $L_n = \frac{1}{n}\sum_{i=1}^n (X_i-m_i)$. Then $L_n\rightarrow 0$ with probability 1.
Proof: Since the variables are pairwise uncorrelated with bounded variance, writing $\sigma_i^2 = Var(X_i)$ we find for all $n$:
$$ E[L_n^2] = \frac{1}{n^2}\sum_{i=1}^n \sigma_i^2 \leq \frac{\sigma_{bound}^2}{n} $$
Fix $\epsilon>0$. By Markov's inequality applied to $L_n^2$:
$$ P[|L_n|>\epsilon] = P[L_n^2 > \epsilon^2] \leq \frac{E[L_n^2]}{\epsilon^2} \leq \frac{\sigma_{bound}^2}{n\epsilon^2} $$
Hence:
$$ \sum_{n=1}^{\infty} P[|L_{n^2}|>\epsilon] \leq \sum_{n=1}^{\infty}\frac{\sigma_{bound}^2}{n^2\epsilon^2} < \infty $$
and so $L_{n^2}\rightarrow 0$ with probability 1 by the Borel-Cantelli Lemma. That is, the $L_n$ values converge to $0$ along the sparse subsequence $n\in\{1, 4, 9, 16, ...\}$.
Finally, since each term satisfies $X_i-m_i \geq b$, for $m^2 \leq n < (m+1)^2$ we can squeeze $L_n$ between nearby subsequence values:
$$ \frac{m^2}{n}L_{m^2} + \frac{n-m^2}{n}b \;\leq\; L_n \;\leq\; \frac{(m+1)^2}{n}L_{(m+1)^2} - \frac{(m+1)^2-n}{n}b $$
As $n\rightarrow\infty$ we have $m^2/n \rightarrow 1$, $(m+1)^2/n \rightarrow 1$, and $(n-m^2)/n \rightarrow 0$, so both bounds converge to $0$ with probability 1, and hence $L_n\rightarrow 0$ with probability 1. $\Box$
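As a sanity check (not part of the proof), here is a minimal simulation sketch, assuming NumPy, using independent exponential variables with varying means; these satisfy i)-iv) with $b = -2$ and $\sigma^2_{bound} = 4$, and the sample averages $L_n$ visibly shrink toward $0$:

```python
# Minimal sketch: independent (hence pairwise uncorrelated) exponentials
# with means m_i in [1, 2], so Var(X_i) = m_i^2 <= 4 and X_i - m_i >= -2.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
means = 1.0 + rng.random(n)            # m_i in [1, 2]
X = rng.exponential(scale=means)       # X_i >= 0, so X_i - m_i >= -2
L = np.cumsum(X - means) / np.arange(1, n + 1)

for k in [10**2, 10**4, 10**6]:
    print(f"L_{k} = {L[k-1]:+.5f}")    # should shrink toward 0
```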
The lower bounded condition is typically treated by writing $X_n = X_n^+ - X_n^-$, where $X_n^+$ and $X_n^-$ are nonnegative and defined by $X_n^+=\max[X_n,0]$ and $X_n^-=-\min[X_n,0]$. If $X_n$ and $X_i$ are independent, then $X_n^+$ and $X_i^+$ are also independent. So the lower bounded condition can be removed for the case when the variables are independent. However, if $X_n$ and $X_i$ are uncorrelated, that does not mean $X_n^+$ and $X_i^+$ are uncorrelated. So it is not clear to me if the lower-bounded condition can be removed when "independence" is replaced by the weaker condition "pairwise uncorrelated."
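To illustrate the last point concretely, here is a small exact computation (a sketch in Python; $B$ takes values $0,1$ and $D$ takes values $-1,1$, each with probability $\frac{1}{2}$, independently): the pair $X = BD$, $Y = B$ is uncorrelated, yet the positive parts $X^+$ and $Y^+$ are correlated.

```python
# Enumerate the four equally likely outcomes of (B, D) exactly.
from itertools import product

outcomes = list(product([0, 1], [-1, 1]))               # (B, D) pairs, each w.p. 1/4
E = lambda f: sum(f(b, d) for b, d in outcomes) / 4     # exact expectation
pos = lambda x: max(x, 0)                               # positive part

# X = B*D and Y = B are uncorrelated ...
cov_XY = E(lambda b, d: (b * d) * b) - E(lambda b, d: b * d) * E(lambda b, d: b)
# ... but their positive parts are not.
cov_pos = (E(lambda b, d: pos(b * d) * pos(b))
           - E(lambda b, d: pos(b * d)) * E(lambda b, d: pos(b)))

print(cov_XY)    # 0.0
print(cov_pos)   # 0.125
```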
Note that we require the two random variables to be jointly Gaussian for the result to hold (it is possible for two random variables to each be Gaussian without being jointly Gaussian). Also, I shall work with density functions instead of characteristic functions for simplicity (you can deduce the density from the characteristic function, and vice versa). For notational purposes, I denote the random vector by $\left(X_1,X_2\right)$.
The joint probability density function of a bivariate normal distribution is given by
$$f_{X_1,X_2}\left(x_1,x_2\right)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \times \exp{\left[-\frac{\left\{\left(\frac{x_1-\mu_1}{\sigma_1}\right)^2+\left(\frac{x_2-\mu_2}{\sigma_2}\right)^2-2\rho\left(\frac{x_1-\mu_1}{\sigma_1}\right)\left(\frac{x_2-\mu_2}{\sigma_2}\right)\right\}}{2\left(1-\rho^2\right)}\right]}$$
Now, $Cov(X_1,X_2)=0 \Leftrightarrow \rho=0$. Setting $\rho=0$, the joint density function reduces to
\begin{align*}
f_{X_1,X_2}\left(x_1,x_2\right)&=\frac{1}{2\pi\sigma_1\sigma_2} \times \exp{\left[-\frac{\left\{\left(\frac{x_1-\mu_1}{\sigma_1}\right)^2+\left(\frac{x_2-\mu_2}{\sigma_2}\right)^2\right\}}{2}\right]}\\
&=\frac{1}{\sigma_1\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{x_1-\mu_1}{\sigma_1}\right)^2\right\} \times \frac{1}{\sigma_2\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{x_2-\mu_2}{\sigma_2}\right)^2\right\}
\end{align*}
Since we can factor the joint density function as $g(x_1) \times h(x_2)$ (where the factors are automatically the respective marginal densities), we conclude that $X_1$ and $X_2$ are independent.
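As a quick numerical cross-check of this factorization (a sketch assuming SciPy; the parameter values are arbitrary):

```python
# Check that with rho = 0 the bivariate normal density equals the
# product of the two marginal normal densities at a sample point.
import numpy as np
from scipy.stats import multivariate_normal, norm

mu1, mu2, s1, s2 = 1.0, -2.0, 0.5, 3.0
joint = multivariate_normal(mean=[mu1, mu2], cov=[[s1**2, 0.0], [0.0, s2**2]])

x1, x2 = 0.7, -1.3
lhs = joint.pdf([x1, x2])
rhs = norm.pdf(x1, mu1, s1) * norm.pdf(x2, mu2, s2)
print(np.isclose(lhs, rhs))   # True: f(x1, x2) = g(x1) * h(x2)
```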
Best Answer
Independence is equivalent to the measure on the product space being the product measure. This is actually just a restatement of the definitions: $\mu_{X_1,\ldots,X_n}(E_1 \times \cdots \times E_n) = P\big((X_1,\ldots,X_n) \in E_1 \times \cdots \times E_n\big) = \prod P(X_i \in E_i) = \prod \mu_i (E_i) = (\mu_1 \times \cdots \times \mu_n) (E_1 \times \cdots \times E_n)$
This is sufficient for Fubini's Theorem (modulo existence of integrals).
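As a concrete illustration (a sketch with arbitrary discrete marginals), summing $g(x)h(y)$ against the product measure agrees with the iterated sums, which is exactly what Fubini's Theorem licenses:

```python
# For independent X, Y: E[g(X)h(Y)] via the joint (product) p.m.f.
# equals the iterated computation E[g(X)] * E[h(Y)].
pX = {0: 0.3, 1: 0.7}                  # marginal of X
pY = {-1: 0.5, 2: 0.5}                 # marginal of Y
g, h = (lambda x: x**2 + 1), (lambda y: 3*y)

joint = sum(g(x) * h(y) * pX[x] * pY[y] for x in pX for y in pY)
iterated = sum(g(x) * pX[x] for x in pX) * sum(h(y) * pY[y] for y in pY)
print(joint, iterated)                 # equal (up to float rounding)
```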
edit: Looking back at this, I suppose I should have been clearer, since I may have misinterpreted your question. This is unambiguously true for product measures and could be true more generally, depending on what you mean by $E_X$ and $E_Y$. Let's move down to the absolutely continuous case to make this a bit clearer: in general, we have $f_{X,Y}(x,y) = f_{X|Y}(x|y) f_Y (y)$ (this does generalize beyond the absolutely continuous case, but the notation is cumbersome). If your question does not involve conditioning, then take any pair of random variables for which the conditional density $f_{X|Y} (x|y)$ does not equal the marginal $f_X (x)$. This is how I interpreted your original question, but that is probably not what you meant.
In light of your comment I will give a standard counterexample. Let $B = 1$ or $0$ each with probability $\frac{1}{2}$ and, independently, $D = 1$ or $-1$ each with probability $\frac{1}{2}$. Then $A = BD$ is uncorrelated with $B$, but $E[A^2 B] = \frac{1}{2} \neq E_A[A^2]\, E_B[B] = \frac{1}{4}$, where I am interpreting $E_B$ to mean the integral with respect to the marginal distribution of $B$.
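A quick Monte Carlo check of this counterexample (a sketch assuming NumPy):

```python
# A = B*D is uncorrelated with B, yet E[A^2 B] = 1/2 while E[A^2] E[B] = 1/4.
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
B = rng.integers(0, 2, size=n)             # 0 or 1, each w.p. 1/2
D = rng.choice([-1, 1], size=n)            # -1 or 1, each w.p. 1/2, indep. of B
A = B * D

print(np.cov(A, B)[0, 1])                  # ~0: A and B uncorrelated
print(np.mean(A**2 * B))                   # ~0.5
print(np.mean(A**2) * np.mean(B))          # ~0.25
```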