The answer to your question is yes. Note that all you need to prove is the following equality:
$$
\int_{E} X\ dP
=
\int_{E} \left[\int_{\Omega} X\ dP(\cdot|\mathcal{N})(\omega) \right]
dP|_{\mathcal{N}} \qquad \qquad (1)
$$
for all $E\in\mathcal{N}$.
The simplest way to do this is to start with indicator functions, then use linearity, and finally invoke the powerful approximation theorems of measure theory.
Let us start with the case $X=1_{A}$, where $A\in \mathcal{F}$. In this case the above equality holds precisely when
$$
P(A\cap E)=\int_{\Omega} \left[\int_{\Omega} 1_{A\cap E}\ dP(\cdot|\mathcal{N})(\omega) \right]
dP|_{\mathcal{N}}.
$$
But this is certainly true, and we can prove it by manipulating the right-hand side. Indeed,
$$
\int_{\Omega} \left[\int_{\Omega} 1_{A\cap E}\ dP(\cdot|\mathcal{N})(\omega) \right]
dP|_{\mathcal{N}}
=
\int_{\Omega} \mathbb{E}(1_{A\cap E}|\mathcal{N})\
dP|_{\mathcal{N}}
=
\int_{E} \mathbb{E}(1_{A}|\mathcal{N})\
dP|_{\mathcal{N}},
$$
where in the last equality we used that $1_{A\cap E}=1_{A}1_{E}$ and that $E$ is $\mathcal{N}$-measurable, so the factor $1_{E}$ pulls out of the conditional expectation.
But the right-hand side above equals $P(A\cap E)$ by the defining property of conditional expectation. So the identity (1) holds for every indicator function and for all $E\in\mathcal{N}$.
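To spell out that last appeal to the defining property (a one-line verification):
$$
\int_{E} \mathbb{E}(1_{A}|\mathcal{N})\ dP|_{\mathcal{N}}
=\int_{E} \mathbb{E}(1_{A}|\mathcal{N})\ dP
=\int_{E} 1_{A}\ dP
=P(A\cap E),
$$
where the first equality holds because the integrand is $\mathcal{N}$-measurable and $P|_{\mathcal{N}}$ agrees with $P$ on $\mathcal{N}$, and the second equality is the defining property of $\mathbb{E}(\,\cdot\,|\mathcal{N})$ tested against $E\in\mathcal{N}$.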
By linearity of the integral, (1) is certainly true if $X$ is any simple function. Now let $X$ be any positive $\mathcal{F}$-measurable function. We know that there is a sequence of simple functions $\varphi_n \uparrow X$, where the up arrow means monotone (pointwise nondecreasing) convergence. Since at this point it is clear to us that
$$
\begin{aligned}
\int_{E} \varphi_n\ dP
&=
\int_{E} \left[\int_{\Omega} \varphi_n\ dP(\cdot|\mathcal{N})(\omega) \right] dP|_{\mathcal{N}}
\\
&= \int_{E} \mathbb{E}(\varphi_n|\mathcal{N})\ dP|_{\mathcal{N}},
\end{aligned}
$$
we can finish the proof by invoking the Monotone Convergence Theorem.
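In more detail (a sketch; it uses the conditional version of monotone convergence, which follows from the monotonicity of $\mathbb{E}(\,\cdot\,|\mathcal{N})$ together with the unconditional theorem): since $\varphi_n\uparrow X$, we have $\mathbb{E}(\varphi_n|\mathcal{N})\uparrow \mathbb{E}(X|\mathcal{N})$ a.s., hence
$$
\int_{E} X\ dP
=\lim_n \int_{E} \varphi_n\ dP
=\lim_n \int_{E} \mathbb{E}(\varphi_n|\mathcal{N})\ dP|_{\mathcal{N}}
=\int_{E} \mathbb{E}(X|\mathcal{N})\ dP|_{\mathcal{N}},
$$
where the first and last equalities are the Monotone Convergence Theorem. For a general integrable $X$, write $X=X^{+}-X^{-}$ and use linearity.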
Edit. I corrected the error pointed out by Didier Piau.
Addition. Theorem. Let $(\Omega,\mathcal{F},\mu)$ be a $\sigma$-finite measure space, $\mathcal{A}$ a sub-$\sigma$-algebra of $\mathcal{F}$, and $\nu=\mu|_{\mathcal{A}}$. If $f\in L^1(\mu)$, there exists $g\in L^1(\nu)$ (thus $g$ is $\mathcal{A}$-measurable) such that
$$
\int_{E} f\ d\mu =\int_{E} g\ d\nu
$$
for all $E\in\mathcal{A}$; if $g'$ is another such function, then $g=g'$ $\nu$-a.e. (Note that $g$ is the conditional expectation of $f$ on $\mathcal{A}$.) (Folland)
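For what it is worth, here is one standard way to obtain $g$ (a sketch via the Radon–Nikodym theorem, assuming $\nu$ is also $\sigma$-finite, which that theorem needs; I am not claiming this is verbatim Folland's argument): define a finite signed measure on $\mathcal{A}$ by
$$
\lambda(E)=\int_{E} f\ d\mu,\qquad E\in\mathcal{A}.
$$
Then $\lambda\ll\nu$, so $g=d\lambda/d\nu$ exists, is $\mathcal{A}$-measurable, lies in $L^1(\nu)$, and satisfies the displayed identity by the very definition of the Radon–Nikodym derivative.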
My original answer (below) contains an error, since $\Phi$ is not necessarily measurable. In fact, that original proof sketch does not use the fact that $g$ is a measurable stochastic process, only that it is a stochastic process. Right now, I cannot see a way to fix this without additional assumptions on $g$; indeed, I do not believe the result is true without them, as the following counterexample shows.
Let $\Omega=[0,1]$ with $\mathcal{F}$ the Lebesgue $\sigma$-algebra and $P$ Lebesgue measure. Let $D=[0,1]$. Let $G(\omega,t)=1_{\{\omega=t\}}$ and $\Pi(\omega)=\omega$. For fixed $t\in D$, we have $G(t)=0$ a.s., so the random variable $G(t)$ is independent of everything, and $h(t):=E[G(t)]=0$ for all $t$. On the other hand, $G(\Pi)=1$ a.s., so $G(\Pi)$ is independent of everything, which gives
$$
E[G(\Pi)\mid\Pi]=E[G(\Pi)]=1,
$$
whereas $h(\Pi)=0$. So $E[G(\Pi)\mid\Pi]\neq h(\Pi)$, and the proposed formula fails.
Original (flawed) answer:
First, let me point out a small confusion in notation. Under normal usage,
$$
E[j(\Pi)] = \int j(\Pi(\omega))(\omega)\,dP(\omega),
$$
without any tildes, which is of course not what you want. One way of carefully notating what you intend is to say that $E[H\mid\Pi]=h(\Pi)$, where $h(\pi)=E[j(\pi)]$.
This is indeed the correct answer. Heuristically, $g$ and $\Pi$ are independent, so in the conditional expectation, you can treat $\Pi$ like a constant and just use the ordinary expectation. For a rigorous formulation of this, you can do the following.
First, we may regard $g$ as a function from $\Omega$ to $\mathbb{R}^D$, the set of functions from $D$ to $\mathbb{R}$, with $g(\omega)(\pi)=g(\pi,\omega)$. With this identification, it follows that $g$ is $\mathcal{G}/\mathcal{B}(\mathbb{R})^D$-measurable. Here $\mathcal{B}(\mathbb{R})^D=\bigotimes_{\pi\in D}\mathcal{B}(\mathbb{R})$ is the product $\sigma$-algebra.
Next, show that since $j(\pi)$ and $\Pi$ are independent for all $\pi\in D$, it follows that $g$ and $\Pi$ are independent. (The $\pi$-$\lambda$ theorem should do the trick here.)
Now define $\Phi:\mathbb{R}^D\times D\to\mathbb{R}$ by $\Phi(f,\pi)=f(\pi)$, so that $H=\Phi(g,\Pi)$, and verify that $\Phi$ is $(\mathcal{B}(\mathbb{R})^D \otimes \mathcal{B}(D))/\mathcal{B}(\mathbb{R})$-measurable.
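As noted above, this is the step that fails. For completeness, here is the standard obstruction (a sketch, using the well-known fact that every set in a product $\sigma$-algebra depends on only countably many coordinates): if $\Phi$ were measurable, there would be a countable set $C\subset D$ such that membership in $\Phi^{-1}(\{1\})$ depends only on $(f|_{C},\pi)$. For uncountable $D$, pick $\pi_0\in D\setminus C$; then
$$
f=1_{\{\pi_0\}}\quad\text{and}\quad f'=0
$$
agree on $C$, yet $\Phi(f,\pi_0)=1\neq 0=\Phi(f',\pi_0)$, a contradiction.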
Finally, use the following.
Theorem. Let $(\Omega,\mathcal{F},P)$ be a probability space and $(S,\mathcal{S})$ a measurable space. Let $X$ be an $S$-valued random variable, $\mathcal{G}\subset\mathcal{F}$ a $\sigma$-algebra, and suppose $X$ and $\mathcal{G}$ are independent. Let $(T,\mathcal{T})$ be a measurable space and $Y$ a $T$-valued random variable. Let $f:S\times T\to\mathbb{R}$ be $(\mathcal{S}\otimes\mathcal{T},\mathcal{B}(\mathbb{R}))$-measurable with $E|f(X,Y)|<\infty$. If $Y$ is $\mathcal{G}/\mathcal{T}$-measurable, then
$$
E[f(X,Y) \mid \mathcal{G}] = \int_S f(x,Y)\,\mu(dx)
\quad\text{a.s.},
$$
where $\mu$ is the distribution of $X$.
This theorem is a special case of Theorem 6.66 in these notes: http://math.swansonsite.com/19s6245notes.pdf.
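Had the measurability of $\Phi$ held, the theorem would be applied as follows (the intended, but as explained above flawed, route): take $S=\mathbb{R}^D$, $X=g$, $T=D$, $Y=\Pi$, $\mathcal{G}=\sigma(\Pi)$, and $f=\Phi$, so that
$$
E[H\mid\Pi]=E[\Phi(g,\Pi)\mid\sigma(\Pi)]
=\int_{\mathbb{R}^D}\Phi(x,\Pi)\,\mu(dx)
=\int_{\mathbb{R}^D}x(\Pi)\,\mu(dx)
=h(\Pi),
$$
where $\mu$ is the distribution of $g$ on $\mathbb{R}^D$.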
Best Answer
The difference between the two definitions is that in the first one, we need to check that $\mathbb E\left[XY\right]=\mathbb E\left[ZY\right]$ only for $Y$ of the form $\mathbf 1_A$ with $A\in\mathcal G$, whereas in the second definition this must hold for all bounded $\mathcal G$-measurable functions $Y$.
All we need is the following fact: every bounded $\mathcal G$-measurable function can be approximated in the uniform norm by linear combinations of indicator functions of sets in $\mathcal G$.
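To spell out how this fact closes the gap (a short sketch; I assume $X,Z\in L^1$, which the definitions require for the expectations to make sense): given a bounded $\mathcal G$-measurable $Y$, choose simple $\mathcal G$-measurable $Y_n$ with $\|Y-Y_n\|_\infty\le 1/n$, for instance
$$
Y_n=\sum_{k}\frac{k}{n}\,\mathbf 1_{\{k/n\le Y<(k+1)/n\}},
$$
the sum running over the finitely many integers $k$ for which the level set is nonempty. By linearity, $\mathbb E[XY_n]=\mathbb E[ZY_n]$ for every $n$, and
$$
\bigl|\mathbb E[XY]-\mathbb E[XY_n]\bigr|\le \mathbb E|X|\cdot\|Y-Y_n\|_\infty\le\frac{\mathbb E|X|}{n}\longrightarrow 0,
$$
and similarly for $Z$; letting $n\to\infty$ gives $\mathbb E[XY]=\mathbb E[ZY]$.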