Here is a counter-example to the direction $(\Rightarrow)$:
Let $\Omega = \{1, 2, 3, 4\}$ with $\Sigma = 2^{\Omega}$, and let
$$\mathcal{F} = \sigma(\{1,2\}, \{3,4\}) \qquad\text{and}\qquad \mathcal{G}=\sigma(\{1,4\}, \{2,3\}). $$
Also, let $X : \Omega \to [0, 1]$ be defined by
$$ X(\omega) = c\omega $$
where $c$ is a positive constant with $0 < c \leq \frac{1}{4}$, so that $X$ indeed takes values in $[0,1]$. Then, with the uniform measure $\mathbf{P}$ on $(\Omega, \Sigma)$ that assigns the probability $\frac{1}{4}$ to each singleton event of $\Sigma$, we get
\begin{align*}
\mathbf{E}[X\mid\mathcal{F}]
&= \mathbf{E}[X \mid \{1,2\}]\mathbf{1}_{\{1,2\}} + \mathbf{E}[X \mid \{3,4\}]\mathbf{1}_{\{3,4\}} \\
&= \frac{3c}{2}\mathbf{1}_{\{1,2\}} + \frac{7c}{2}\mathbf{1}_{\{3,4\}}.
\end{align*}
By a similar argument, we can also check that
\begin{align*}
\mathbf{E}[X\mid\mathcal{G}] = \frac{5c}{2} = \mathbf{E}[\mathbf{E}[X\mid\mathcal{F}]\mid\mathcal{G}].
\end{align*}
On the other hand, by noting that $\mathcal{H} = \sigma(\mathcal{F}, \mathcal{G}) = \Sigma$, we get
\begin{align*}
\mathbf{E}[X\mid\mathcal{H}] = X \neq \mathbf{E}[X\mid\mathcal{F}].
\end{align*}
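As a sanity check, the computations above can be verified numerically on this four-point space. The helper `cond_exp` and all variable names below are ours, introduced only for illustration:

```python
# Sanity check of the counter-example on Omega = {1,2,3,4} with the
# uniform measure; c = 0.25 satisfies 0 < c <= 1/4.
c = 0.25
omega = [1, 2, 3, 4]
X = {w: c * w for w in omega}

def cond_exp(f, partition):
    """E[f | sigma(partition)] under the uniform measure:
    average f over each block of the partition."""
    out = {}
    for block in partition:
        avg = sum(f[w] for w in block) / len(block)
        for w in block:
            out[w] = avg
    return out

F = [{1, 2}, {3, 4}]   # generates the sigma-algebra F
G = [{1, 4}, {2, 3}]   # generates the sigma-algebra G

E_F = cond_exp(X, F)        # 3c/2 on {1,2}, 7c/2 on {3,4}
E_G = cond_exp(X, G)        # constant 5c/2
tower = cond_exp(E_F, G)    # E[ E[X|F] | G ]

assert all(abs(E_F[w] - (1.5 * c if w in {1, 2} else 3.5 * c)) < 1e-12 for w in omega)
assert all(abs(E_G[w] - 2.5 * c) < 1e-12 for w in omega)
assert all(abs(tower[w] - E_G[w]) < 1e-12 for w in omega)
# E[X | H] = X on Sigma = 2^Omega, and X differs from E[X|F]:
assert any(abs(X[w] - E_F[w]) > 1e-12 for w in omega)
```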
Suppose $X$ and $Y$ are $\mathbb{R}$-valued random variables such that $X \in L^1(\mathbb{P})$. Then
$$ \mu(A) = \mathbb{P}(Y \in A)
\qquad\text{and}\qquad
\nu(A) = \mathbb{E}[X\mathbf{1}_{\{Y \in A\}}] $$
define finite Borel measures on $\mathbb{R}$. Moreover, it is clear that $\nu \ll \mu$. Hence by the generalized Lebesgue differentiation theorem,
$$ \frac{\mathrm{d}\nu}{\mathrm{d}\mu}(y)
\mathrel{\stackrel{\triangle}=} \lim_{\varepsilon \to 0^+} \frac{\nu(B(y, \varepsilon))}{\mu(B(y, \varepsilon))}
= \lim_{\varepsilon \to 0^+} \mathbb{E}[X \mid |Y - y| < \varepsilon] $$
converges for $\mu$-a.e. $y \in \mathbb{R}$. Moreover, for any $\sigma(Y)$-measurable event $B$, we can find a Borel set $A \subseteq \mathbb{R}$ such that $B = \{Y \in A\}$, hence
\begin{align*}
\mathbb{E}[X \mathbf{1}_B]
= \mathbb{E}[X \mathbf{1}_{\{Y \in A\}}]
&= \int_{\mathbb{R}} \mathbf{1}_{A}(y) \, \nu(\mathrm{d}y) \\
&= \int_{\mathbb{R}} \frac{\mathrm{d}\nu}{\mathrm{d}\mu}(y) \mathbf{1}_{A}(y) \, \mu(\mathrm{d}y) \\
&= \mathbb{E}\left[ \frac{\mathrm{d}\nu}{\mathrm{d}\mu}(Y) \mathbf{1}_A(Y) \right]
= \mathbb{E}\left[ \frac{\mathrm{d}\nu}{\mathrm{d}\mu}(Y) \mathbf{1}_B \right].
\end{align*}
This shows that
$$ \mathbb{E}[X \mid Y = y] = \frac{\mathrm{d}\nu}{\mathrm{d}\mu}(y)
\qquad \text{for $\mu$-a.e. $y$} $$
and
$$ \mathbb{E}[X \mid Y](\omega) = \frac{\mathrm{d}\nu}{\mathrm{d}\mu}(Y(\omega))
\qquad \text{for $\mathbb{P}$-a.s. $\omega$.} $$
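To see the differentiation formula in action, here is a small numeric sketch under illustrative assumptions of our own (not part of the argument above): take $Y \sim \mathrm{Uniform}[0,1]$ and $X = Y^2$, so that $\mu$ is Lebesgue measure on $[0,1]$, $\nu(A) = \int_A t^2\,\mathrm{d}t$, and hence $\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(y) = y^2$:

```python
# Sketch (our illustrative choices): Y ~ Uniform[0,1], X = Y**2.
# Then nu(B(y,eps)) = int_{y-eps}^{y+eps} t^2 dt and mu(B(y,eps)) = 2*eps,
# so E[X | |Y-y| < eps] should tend to dnu/dmu(y) = y^2 as eps -> 0+.

def cond_exp_near(y, eps, n=100_000):
    """E[X 1_{|Y-y|<eps}] / P(|Y-y|<eps), computed by a midpoint
    Riemann sum; assumes [y-eps, y+eps] lies inside [0, 1]."""
    lo, hi = y - eps, y + eps
    h = (hi - lo) / n
    num = sum(((lo + (i + 0.5) * h) ** 2) * h for i in range(n))  # nu(B(y,eps))
    den = hi - lo                                                  # mu(B(y,eps))
    return num / den

y = 0.5
for eps in (0.1, 0.01, 0.001):
    approx = cond_exp_near(y, eps)
    # analytically, E[X | |Y-y|<eps] = y**2 + eps**2/3, so the error
    # shrinks quadratically in eps (up to tiny Riemann-sum error)
    print(eps, approx, abs(approx - y**2))
```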
Best Answer
First, to simplify things, it suffices to consider the case where $X=0$ a.s. Indeed, if we manage to show that for each integrable random variable $Y$, $\mathbb E\left[Y\mathbf{1}_A\right]\geqslant 0$ for all $A\in\mathcal G$ implies that $\mathbb E\left[Y\mid\mathcal G\right]\geqslant 0$ a.s., then we can apply this to $Y-X$ and conclude by linearity of the conditional expectation.
So, let $Y$ be an integrable random variable such that $\mathbb E\left[Y\mathbf{1}_A\right]\geqslant 0$ for all $A\in\mathcal G$. Let $A_n=\{\mathbb E\left[Y\mid\mathcal G\right]\leqslant -1/n\}$, where $n$ is a positive integer. We know, by assumption and the fact that $A_n\in\mathcal G$, that $\mathbb E\left[Y\mathbf{1}_{A_n}\right]\geqslant 0$. Moreover, by the definition of the conditional expectation, $$0\leqslant\mathbb E\left[Y\mathbf{1}_{A_n}\right]=\mathbb E\left[\mathbb E\left[Y\mid \mathcal G\right]\mathbf{1}_{A_n}\right]\leqslant -n^{-1}\mathbb P(A_n).$$ This forces $A_n$ to have probability $0$. It follows that $\bigcup_{n\geqslant 1}A_n$ has probability $0$, and this set is exactly $\{\mathbb E\left[Y\mid\mathcal G\right]< 0\}$.
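The argument can be replayed on a finite probability space. The sketch below (partition, measure, and $Y$ all chosen by us purely for illustration) checks that $\mathbb E\left[Y\mathbf{1}_A\right]\geqslant 0$ for every $A\in\mathcal G$ forces $\mathbb E\left[Y\mid\mathcal G\right]\geqslant 0$ pointwise, even though $Y$ itself takes a negative value:

```python
from itertools import combinations

# Finite sketch: Omega = {0,1,2,3} with uniform P, and G generated by
# the partition {{0,1},{2,3}} (all choices ours, for illustration only).
omega = range(4)
P = {w: 0.25 for w in omega}
partition = [{0, 1}, {2, 3}]

def events(partition):
    """All events in sigma(partition): unions of blocks (incl. empty set)."""
    evs = []
    for r in range(len(partition) + 1):
        for blocks in combinations(partition, r):
            evs.append(set().union(*blocks) if blocks else set())
    return evs

def cond_exp(Y):
    """E[Y|G]: P-weighted average of Y over the block containing each point."""
    out = {}
    for block in partition:
        pb = sum(P[w] for w in block)
        avg = sum(Y[w] * P[w] for w in block) / pb
        for w in block:
            out[w] = avg
    return out

# Y is negative at one point, yet E[Y 1_A] >= 0 for every A in sigma(G):
Y = {0: -1.0, 1: 2.0, 2: 0.5, 3: 0.5}
assert all(sum(Y[w] * P[w] for w in A) >= 0 for A in events(partition))
# ... and therefore E[Y|G] >= 0 everywhere, as the proof predicts:
assert all(v >= 0 for v in cond_exp(Y).values())
```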