Machine Learning – Understanding General Notations in Probability and Statistics

machine-learning, mathematical-statistics, notation, probability

I would like to ask about the meaning of this kind of notation in general. For example,
$$
\mathbb{E}_{(\mathbf{u}, \mathbf{\sigma}, \mathbf{Y}_0)\sim N(0,1)\otimes \mu_D\otimes N(0,1)}\left[ e^{\mathbf{\sigma} t}\mathbf{Y}_0\mathbf{u}\right]
$$

where $\mathbf{Y}_0$ is a column vector and $\mu_D$ is a probability measure with compact support. However, the author does not explain what $\mathbf{u}$ is here. How do I write this expectation as an integral?

Is it
$$
\mathbb{E}_{(\mathbf{u}, \mathbf{\sigma}, \mathbf{Y}_0)\sim N(0,1)\otimes \mu_D\otimes N(0,1)}\left[ e^{\mathbf{\sigma} t}\mathbf{Y}_0\mathbf{u}\right]=\int e^{\mathbf{\sigma} t}\mathbf{Y}_0\mathbf{u} d\mu_D?
$$

I think this means that these random variables follow the indicated distributions and that the expectation is taken with respect to them. But the author also writes
$$
\mathbb{E}_{(\mathbf{u}, \mathbf{\sigma}, \mathbf{Y}_0)\sim N(0,1)\otimes \mu_D\otimes N(0,1)}\left[ e^{\mathbf{\sigma} t}\mathbf{u}^2 \right]
$$

There is no $\mathbf{Y}_0$ in this expression, so why is the expectation still taken with respect to the $N(0,1)$ measure for $\mathbf{Y}_0$?

Best Answer

$$\mathbb{E}_{(\mathbf{u}, \mathbf{\sigma}, \mathbf{Y}_0)\sim N(0,1)\otimes \mu_D\otimes N(0,1)}\left[ e^{\mathbf{\sigma} t}\mathbf{Y}_0\mathbf{u}\right]=\int\int\int e^{\mathbf{\sigma} t}\mathbf{Y}_0\mathbf{u}\,\text d N(0,1)(\mathbf{u})\,\text d \mu_D(\mathbf{\sigma})\,\text d N(0,1)(\mathbf{Y}_0),$$
except that the $\text d N(0,1)(\mathbf{Y}_0)$ term clashes with the fact that $\mathbf{Y}_0$ is a vector.
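
As for the second expectation: assuming the product measure $\otimes$ means that $\mathbf{u}$, $\mathbf{\sigma}$, and $\mathbf{Y}_0$ are mutually independent (which is what this notation usually indicates), it is written the same way, and the variable $\mathbf{Y}_0$ simply integrates out to a factor of $1$ because the integrand does not depend on it:
$$\mathbb{E}_{(\mathbf{u}, \mathbf{\sigma}, \mathbf{Y}_0)\sim N(0,1)\otimes \mu_D\otimes N(0,1)}\left[ e^{\mathbf{\sigma} t}\mathbf{u}^2 \right]=\int\int e^{\mathbf{\sigma} t}\mathbf{u}^2\,\text d N(0,1)(\mathbf{u})\,\text d \mu_D(\mathbf{\sigma})\,\underbrace{\int \text d N(0,1)(\mathbf{Y}_0)}_{=1}=\int e^{\mathbf{\sigma} t}\,\text d \mu_D(\mathbf{\sigma}),$$
where the last step uses $\int \mathbf{u}^2\,\text d N(0,1)(\mathbf{u})=1$ for a standard normal $\mathbf{u}$.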