In an expression where more than one random variable is involved, the symbol $E$ alone does not clarify with respect to which random variable the expected value is taken. For example, is it
$$E[h(X,Y)] =\text{?} \int_{-\infty}^{\infty} h(x,y) f_X(x)\,dx$$
or
$$E[h(X,Y)] = \text{?} \int_{-\infty}^\infty h(x,y) f_Y(y)\,dy$$
Neither. When many random variables are involved, and there is no subscript in the $E$ symbol, the expected value is taken with respect to their joint distribution:
$$E[h(X,Y)] = \int_{-\infty}^\infty \int_{-\infty}^\infty h(x,y) f_{XY}(x,y) \, dx \, dy$$
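The discrete analogue is easy to check numerically. Here is a minimal sketch in Python, using a made-up $2\times 2$ joint pmf and $h(x,y)=xy$ (all the numbers are illustrative assumptions, not taken from any text):

```python
# Made-up joint pmf of (X, Y); the probabilities are illustrative only.
joint = {  # (x, y) -> P(X = x, Y = y)
    (0, 0): 0.1, (0, 1): 0.4,
    (1, 0): 0.3, (1, 1): 0.2,
}

def h(x, y):
    return x * y  # an arbitrary function of both variables

# Discrete analogue of the double integral against f_XY:
# sum over all (x, y) cells of h(x, y) times the joint probability.
e_joint = sum(h(x, y) * p for (x, y), p in joint.items())
print(e_joint)  # only the (1, 1) cell contributes
```

Integrating against a single marginal instead would leave an unreduced function of the other variable, which is exactly why neither single integral above can be the answer.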
When a subscript is present... in some cases it tells us on which variable we should condition. So
$$E_X[h(X,Y)] = E[h(X,Y)\mid X] = \int_{-\infty}^\infty h(x,y) f_{Y\mid X}(y\mid x)\,dy $$
Here, we "integrate out" the $Y$ variable, and we are left with a function of $X$.
...But in other cases, it tells us which marginal density to use for the "averaging"
$$E_X[h(X,Y)] = \int_{-\infty}^\infty h(x,y) f_{X}(x) \, dx $$
Here, we "average over" the $X$ variable, and we are left with a function of $Y$.
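The two conventions can be contrasted on a small discrete example (a sketch; the joint pmf below is a made-up illustration). Conditioning on $X$ leaves a function of $x$; averaging with the marginal $f_X$ leaves a function of $y$:

```python
# Made-up joint pmf of (X, Y), for illustration only.
joint = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.3, (1, 1): 0.2}
h = lambda x, y: x * y

# Marginal of X: f_X(x) = sum over y of f_XY(x, y).
f_X = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}

# Convention 1: condition on X and integrate out Y -> a function of x.
def E_given_X(x):
    return sum(h(x, y) * joint[(x, y)] / f_X[x] for y in (0, 1))

# Convention 2: average over X with the marginal f_X -> a function of y.
def E_over_X(y):
    return sum(h(x, y) * f_X[x] for x in (0, 1))

print(E_given_X(1))  # depends on the value of x we plug in
print(E_over_X(1))   # depends on the value of y we plug in
```

The two functions generally disagree, which is the whole point: the same symbol $E_X$ can denote either operation depending on the author.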
Rather confusing, I would say, but who ever said that scientific notation is totally free of ambiguity or multiple uses? You should check how each author defines the use of such symbols.
Even in very advanced mathematics it helps to study simple examples. Part of the art of reading and learning mathematics is to construct such examples for yourself. This answer illustrates the process.
Let's consider the simplest non-trivial possible situation, where the probability space $\Omega = \{\omega_1, \omega_2\}$ has just two elements. We will suppose every subset of $\Omega$ is an event, which enables us to specify any probability distribution in terms of the probabilities $\pi_i = \mathbb P(\{\omega_i\})$ (with $\pi_1+\pi_2=1,$ of course).
To be concrete, let $A:\Omega\to\mathbb R$ be the random variable $A(\omega_1) = 0$ and $A(\omega_2) = 4;$ and let $B$ be the random variable $B(\omega_1) = 2$ and $B(\omega_2) = 0.$ (I chose these numbers to make the arithmetic simple, especially when working with the uniform distribution $\pi_1=\pi_2=1/2.$)
We can present this information about the random variables $A$ and $B$ conveniently as a table of probabilities and values of the random variables:
$$\begin{array}{cc|cc}
\omega & \mathbb P & A & B\\
\hline
\omega_1 &\pi_1 & 0 & 2\\
\omega_2 & \pi_2 & 4 & 0
\end{array}$$
To construct iid variables we need $N$ separate copies of this probability space. The simplest way, with $N=2,$ is to work in the product space $\Omega\times \Omega.$ The definitions tell us all the probabilities and the values of the random variables, so I will just summarize the results here, writing $(\omega,\eta)$ for a generic element of $\Omega\times \Omega.$ $A_1$ is the random variable defined by
$$A_1(\omega,\eta) = A(\omega)$$
while $A_2$ is the random variable
$$A_2(\omega,\eta) = A(\eta),$$
and likewise for the $B_i.$
$$\begin{array}{ccc|rrrr}
\omega & \eta & \mathbb P & A_1 & A_2 & B_1 & B_2\\
\hline
\omega_1 & \omega_1 & \pi_1^2 & 0 & 0 & 2 & 2\\
\omega_1 & \omega_2 & \pi_1\pi_2 & 0 & 4 & 2 & 0\\
\omega_2 & \omega_1 & \pi_2\pi_1 & 4 & 0 & 0 & 2\\
\omega_2 & \omega_2 & \pi_2^2 & 4 & 4 & 0 & 0
\end{array}$$
There's nothing new here: this is the standard product construction for creating iid variables. If you're unconvinced, verify that $(A_1,A_2)$ are independent and $(B_1,B_2)$ are independent.
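That verification is mechanical enough to script. Here is a sketch in Python, taking $\pi_1 = 0.3$ as an arbitrary concrete choice (any value in $(0,1)$ works the same way):

```python
from itertools import product

# Probabilities and the values of A and B from the table; pi_1 = 0.3
# is an arbitrary illustrative choice.
pi = {"w1": 0.3, "w2": 0.7}
A = {"w1": 0, "w2": 4}
B = {"w1": 2, "w2": 0}

# Rows of the product-space table: ((omega, eta), probability).
rows = [((w, e), pi[w] * pi[e]) for w, e in product(pi, pi)]

def prob(event):
    """Total probability of the rows satisfying `event`."""
    return sum(p for (w, e), p in rows if event(w, e))

# Independence of (V1, V2) for V in {A, B}: the joint probability of any
# pair of values factors into the product of the marginals.
for V in (A, B):
    for v1 in set(V.values()):
        for v2 in set(V.values()):
            joint_p = prob(lambda w, e: V[w] == v1 and V[e] == v2)
            marg_p = prob(lambda w, e: V[w] == v1) * prob(lambda w, e: V[e] == v2)
            assert abs(joint_p - marg_p) < 1e-12
print("independence verified")
```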
The question concerns functions of these random variables. The left hand side of the equation in the question expands to
$$E\left[\frac{1}{N}\sum_{i=1}^N A_i\right] = E\left[\frac{1}{2}\left(A_1+A_2\right)\right].$$
It is convenient to denote this $\bar A,$ as in the question. $\bar B$ is similarly defined. Let's expand the preceding table to include these quantities as well as their product, $\bar A \bar B = (\sum A_i \sum B_j)/N^2.$ Looking ahead to the right hand side of the equation, I will also include the products $A_iB_j:$
$$\begin{array}{ccc|rrrr:cccccc:c}
\omega & \eta & \mathbb P & A_1 & A_2 & B_1 & B_2 & A_1B_1 & A_1B_2 & A_2B_1 & A_2B_2 & \bar A & \bar B & \bar A \bar B\\
\hline
\omega_1 & \omega_1 & \pi_1^2 & 0 & 0 & 2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\omega_1 & \omega_2 & \pi_1\pi_2 & 0 & 4 & 2 & 0 & 0 & 0 & 8 & 0 & 2 & 1 & 2\\
\omega_2 & \omega_1 & \pi_2\pi_1 & 4 & 0 & 0 & 2 & 0 & 8 & 0 & 0 & 2 & 1 & 2\\
\omega_2 & \omega_2 & \pi_2^2 & 4 & 4 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0
\end{array}$$
There is nothing probabilistic about these calculations. They are carried out separately on each row and are completely deterministic.
The integrals are just sums of values times their probabilities, as always:
$$\begin{aligned}
E[\bar A\bar B] &= \iint_{\Omega\times \Omega} \bar A(\omega,\eta)\bar B(\omega,\eta)\,(\mathbb P\otimes\mathbb P)(\mathrm d(\omega,\eta))\\
& = \pi_1^2(0) + \pi_1\pi_2(2) + \pi_2\pi_1(2) + \pi_2^2(0) = 4\pi_1\pi_2.
\end{aligned}\tag{*}$$
That's the left hand side of the equation in the question. The right hand side is a combination of four expectations. They are worked out in the same way from the table, by multiplying suitable columns by their probabilities and summing:
$$\begin{aligned}
E[A_1B_1] &= \pi_1^2(0) + \pi_1\pi_2(0) + \pi_2\pi_1(0) + \pi_2^2(0) &= 0\\
E[A_1B_2] &= \cdots &= \pi_2\pi_1(8)\\
E[A_2B_1] &= \cdots &= \pi_1\pi_2(8)\\
E[A_2B_2] &= \cdots &= 0.
\end{aligned}$$
Consequently, still just doing simple algebra,
$$\frac{1}{N^2}\sum_i\sum_j E[A_iB_j] = \frac{1}{4}\left(0 + \pi_2\pi_1(8) + \pi_1\pi_2(8) + 0\right) = 4\pi_1\pi_2,$$
agreeing with $(*)$ and demonstrating the equation in this case.
Finally, compute $E[\bar A] = 4\pi_1\pi_2 + 4\pi_2^2 = 4\pi_2$ and $E[\bar B] = 2\pi_1\pi_2.$ For most values of $\pi_1$ the product of these expectations does not equal $E[\bar A\bar B] = 4\pi_1\pi_2:$ that is because $A$ and $B$ are not independent. (Indeed, $B = (4-A)/2$ exhibits this dependence explicitly.)
Best Answer
$$\mathbb{E}_{(\mathbf{u}, \sigma, \mathbf{Y}_0)\sim N(0,1)\otimes \mu_D\otimes N(0,1)}\left[ e^{\sigma t}\mathbf{Y}_0\mathbf{u}\right]=\iiint e^{\sigma t}\mathbf{Y}_0\mathbf{u}\,\mathrm d N(0,1)(\mathbf{u})\,\mathrm d \mu_D(\sigma)\,\mathrm d N(0,1)(\mathbf{Y}_0),$$ except that the $\mathrm d N(0,1)(\mathbf{Y}_0)$ clashes with the fact that $\mathbf{Y}_0$ is a vector.