Let $(\Omega,\mathcal F,\mu)$ be a probability space. The idea of conditional expectation is the following: we have an integrable random variable $X$ and a sub-$\sigma$-algebra $\mathcal G$ of $\mathcal F$. The random variable $X$ is not necessarily measurable with respect to this smaller $\sigma$-algebra, so we would like to find a random variable that is $\mathcal G$-measurable and close, in a sense to be made precise, to $X$.
Assume that $Y$ satisfies conditions 1 and 2 (that is, $Y$ is $\mathcal G$-measurable and $\mathbb E[X\mathbf 1_G]=\mathbb E[Y\mathbf 1_G]$ for every $G\in\mathcal G$). Then
$$X=\color{red}{Y}+\color{blue}{X-Y}.$$
The red random variable is $\mathcal G$-measurable, and if $\varphi$ is a bounded $\mathcal G$-measurable function, then $\mathbb E[(\color{blue}{X-Y})\varphi]=0$. Hence we have written $X$ as the sum of a $\mathcal G$-measurable random variable and another one whose integral over every $\mathcal G$-measurable set vanishes. This is an idea of projection, which can be made concrete when $X$ belongs to $\mathbb L^2$, as the following computation sketches.
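Here is a standard sketch of the $\mathbb L^2$ picture (I am assuming the identity $\mathbb E[(\color{blue}{X-Y})\varphi]=0$ extends from bounded $\varphi$ to all $\varphi\in\mathbb L^2(\mathcal G)$, which follows by truncation and dominated convergence when $X,Y\in\mathbb L^2$): for every $\mathcal G$-measurable $Z\in\mathbb L^2$,
$$
\mathbb E\big[(X-Z)^2\big]=\mathbb E\big[(X-Y)^2\big]+2\,\mathbb E\big[(\color{blue}{X-Y})(Y-Z)\big]+\mathbb E\big[(Y-Z)^2\big]=\mathbb E\big[(X-Y)^2\big]+\mathbb E\big[(Y-Z)^2\big],
$$
since $Y-Z$ is $\mathcal G$-measurable. Hence $\mathbb E[(X-Z)^2]\ge\mathbb E[(X-Y)^2]$, with equality iff $Z=Y$ a.s.: the conditional expectation $Y$ is the orthogonal projection of $X$ onto the closed subspace $\mathbb L^2(\Omega,\mathcal G,\mu)$.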
$\def\om{\omega}$
$\def\Om{\Omega}$
$\def\bR{\mathbb{R}}$
$\def\si{\sigma}$
$\def\cB{\mathcal{B}}$
$\def\cF{\mathcal{F}}$
My original answer (below) contains an error, since $\Phi$ is not necessarily measurable. In fact, that original proof sketch does not use the fact that $g$ is a measurable stochastic process, only that it is a stochastic process. At present I cannot see a way to fix this without additional assumptions on $g$, and in fact I do not believe the result is true without additional assumptions.
Let $\Om=[0,1]$ with $\cF$ the Lebesgue $\si$-algebra and $P$ Lebesgue measure. Let $D=[0,1]$, let $G(\om,t)=1_{\{\om=t\}}$, and let $\Pi(\om)=\om$. For fixed $t\in D$ we have $G(t)=0$ a.s., so the random variable $G(t)$ is independent of everything, and $h(t):=E[G(t)]=0$ for all $t$. On the other hand, $G(\Pi)=1$ identically, since $G(\om,\Pi(\om))=1_{\{\om=\om\}}=1$. So $G(\Pi)$ is independent of everything, which gives
$$
E[G(\Pi)\mid\Pi]=E[G(\Pi)]=1.
$$
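Spelling out why this contradicts the proposed formula: $h$ is identically $0$, so $h(\Pi)=0$ a.s., while
$$
E[G(\Pi)\mid\Pi]=1\ne 0=h(\Pi)\quad\text{a.s.}
$$
The fixed-$t$ laws of $G$ are blind to the diagonal behavior that $G(\Pi)$ depends on.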
Original (flawed) answer:
First, let me point out a small confusion in notation. Under normal usage,
$$
E[g(\Pi)] = \int g(\Pi(\omega))(\omega)\,dP(\omega),
$$
without any tildes, which is of course not what you want. One way of carefully notating what you intend is to say that $E[H\mid\Pi]=h(\Pi)$, where $h(\pi)=E[g(\pi)]$.
This is indeed the correct answer. Heuristically, $g$ and $\Pi$ are independent, so in the conditional expectation, you can treat $\Pi$ like a constant and just use the ordinary expectation. For a rigorous formulation of this, you can do the following.
First, we may regard $g$ as a function from $\Omega$ to $\mathbb{R}^D$, the set of functions from $D$ to $\mathbb{R}$, with $g(\omega)(\pi)=g(\pi,\omega)$. With this identification, it follows that $g$ is $\mathcal{G}/\mathcal{B}(\mathbb{R})^D$-measurable. Here $\mathcal{B}(\mathbb{R})^D=\bigotimes_{\pi\in D}\mathcal{B}(\mathbb{R})$ is the product $\sigma$-algebra.
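A short justification of this measurability claim (using only that each $g(\pi)$ is $\mathcal G$-measurable, which is how I read the setup): $\cB(\bR)^D$ is generated by the coordinate projections $e_\pi:\bR^D\to\bR$, $e_\pi(f)=f(\pi)$, and checking measurability on a generating class suffices, so it is enough that
$$
g^{-1}\big(e_\pi^{-1}(B)\big)=\{\om\in\Om: g(\pi,\om)\in B\}\in\mathcal G\quad\text{for all }\pi\in D,\ B\in\cB(\bR).
$$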
Next, show that since $g(\pi)$ and $\Pi$ are independent for all $\pi\in D$, it follows that $g$ and $\Pi$ are independent. (The $\pi$-$\lambda$ theorem should do the trick here.)
Now define $\Phi:\mathbb{R}^D\times D\to\mathbb{R}$ by $\Phi(f,\pi)=f(\pi)$, so that $H=\Phi(g,\Pi)$, and verify that $\Phi$ is $(\mathcal{B}(\mathbb{R})^D \otimes \mathcal{B}(D))/\mathcal{B}(\mathbb{R})$-measurable.
Finally, use the following.
Theorem. Let $(\Omega,\mathcal{F},P)$ be a probability space and $(S,\mathcal{S})$ a measurable space. Let $X$ be an $S$-valued random variable, $\mathcal{G}\subset\mathcal{F}$ a $\sigma$-algebra, and suppose $X$ and $\mathcal{G}$ are independent. Let $(T,\mathcal{T})$ be a measurable space and $Y$ a $T$-valued random variable. Let $f:S\times T\to\mathbb{R}$ be $(\mathcal{S}\otimes\mathcal{T},\mathcal{B}(\mathbb{R}))$-measurable with $E|f(X,Y)|<\infty$. If $Y$ is $\mathcal{G}/\mathcal{T}$-measurable, then
$$
E[f(X,Y) \mid \mathcal{G}] = \int_S f(x,Y)\,\mu(dx)
\quad\text{a.s.},
$$
where $\mu$ is the distribution of $X$.
This theorem is a special case of Theorem 6.66 in these notes: http://math.swansonsite.com/19s6245notes.pdf.
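To see how the theorem would finish the argument (granting the two previous steps, and writing $\si(\Pi)$ for the $\si$-algebra generated by $\Pi$): take $X=g$ with $(S,\mathcal S)=(\bR^D,\cB(\bR)^D)$, $Y=\Pi$ with $(T,\mathcal T)=(D,\cB(D))$, $\mathcal G=\si(\Pi)$, and $f=\Phi$. The conclusion would read
$$
E[H\mid\Pi]=E[\Phi(g,\Pi)\mid\si(\Pi)]=\int_{\bR^D}\Phi(x,\Pi)\,\mu(dx)=\int_{\bR^D}x(\Pi)\,\mu(dx)=h(\Pi)\quad\text{a.s.},
$$
where $\mu$ is the law of $g$. As noted above, it is precisely the required measurability of $\Phi$ that fails, which is where this argument breaks down.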
Best Answer
If you understood conditional expectation with respect to a sub-$\sigma$-algebra (actually your definition only works for $X\in L^2$; for $X\in L^1$ you should use the Radon–Nikodym theorem or a limiting argument), then the conditional expectation $\mathbb{E}[X\mid A]$ with respect to an event $A$ is simply the value of the conditional expectation with respect to the sub-$\sigma$-algebra $\{\varnothing,A,A^c,\Omega\}$ evaluated at points of $A$ (since $\mathbb{R}$ is Hausdorff, every $a\in A$ gives the same answer).

Note that since $\mathbb{E}[X\mid\mathcal{G}]$ is only defined as an element of $L^1(\mathcal{G})$ (i.e. modulo almost-sure equivalence), and not of the set $\mathscr{L}^1(\mathcal{G})$ of all integrable $\mathcal{G}$-measurable random variables, this is ill-defined when $A$ is a null event, and well-defined when $A$ is non-null. Conventionally, if $A$ is null we define $\mathbb{E}[X\mid A]=0$ for definiteness (similar to how we construct $\mathbb{E}[X\mid\mathcal{G}]$). In particular, we have
$$
\mathbb{E}[X\mid A]\,\mathbb{P}(A)=\mathbb{E}[X1_A].
$$
Note that $\{Y=a\}$ is an event, namely $Y^{-1}(\{a\})$, so this gives $\mathbb{E}[X\mid Y=a]$.
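As a concrete sanity check of this formula (a toy example of my own, not from the question): roll a fair die, let $X$ be the outcome, and let $A=\{X\text{ is even}\}$. Then $\mathbb P(A)=1/2$ and $\mathbb E[X1_A]=\frac{2+4+6}{6}=2$, so
$$
\mathbb{E}[X\mid A]=\frac{\mathbb{E}[X1_A]}{\mathbb{P}(A)}=\frac{2}{1/2}=4,
$$
which is just the average of the even faces, as expected.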
There are authors who insist on promoting nonsense such as $\displaystyle\mathbb{E}[X\mid Y=a]=\int_{\mathbb{R}} x\frac{f_{X,Y}(x,a)}{f_Y(a)}\,\mathrm{d}x$ even when $\mathbb{P}(Y=a)=0$. If you encounter it in any book, please do the good deed and burn the book along with the author.