Expected Value – Computing Expected Value of Normal Lognormal Mixture

Tags: expected-value, normal-distribution

I need to compute the following covariance:
\begin{equation}
\operatorname{Cov}\!\left(X, e^{-aX}\right)
\end{equation}

where $X$ follows a normal distribution, $X \sim \mathcal{N}(0, \sigma^2)$, and $a$ is a constant scalar.

My findings:
From the definition of covariance I concluded that
\begin{equation}
\operatorname{Cov}(X, e^{-aX}) = \mathbb{E}[X e^{-aX}],
\end{equation}

since $X$ is zero-mean, so $\operatorname{Cov}(X, Z) = \mathbb{E}[XZ] - \mathbb{E}[X]\,\mathbb{E}[Z] = \mathbb{E}[XZ]$. Hence it boils down to finding the first moment of the normal lognormal mixture.

Searching Stack Exchange and the web, I found only one reference that treats this topic (the work by Yang): http://repec.org/esAUSM04/up.21034.1077779387.pdf

It gives the first moments of the mixture $u = e^{\frac{1}{2}\eta}\,\epsilon$. The one I am interested in is stated as:
\begin{equation}
E(u) = \frac{1}{2} \rho \sigma e^{\frac{1}{8} \sigma^2}
\end{equation}

I cannot follow the "derivation" of this equation (none is actually given in the paper), but I believe that it is readily applicable to my LNL mixture.

Yang's expected value has two factors: one contains the correlation of the two random variables he considers, and the other contains the exponential of the variance of the process $\eta$.
In my case $\epsilon$ does not have unit variance, but variance $\sigma^2$, and to apply Yang's logic I define $\eta = -2 a X$.
Since these two processes are perfectly negatively correlated ($\rho = -1$), I conclude that the expected value should be:

\begin{equation}
\mathbb{E}[X e^{-aX}] = -a \sigma^2 \exp\!\left(\tfrac{1}{2} a^2 \sigma^2\right)
\end{equation}
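As a sanity check on this formula (this derivation is mine, not Yang's), the same closed form also follows directly from the moment generating function of $X \sim \mathcal{N}(0, \sigma^2)$:

\begin{equation}
M_X(t) = \mathbb{E}[e^{tX}] = e^{\frac{1}{2} t^2 \sigma^2},
\qquad
\mathbb{E}[X e^{tX}] = M_X'(t) = t \sigma^2 e^{\frac{1}{2} t^2 \sigma^2},
\end{equation}

and setting $t = -a$ gives $\mathbb{E}[X e^{-aX}] = -a \sigma^2 e^{\frac{1}{2} a^2 \sigma^2}$.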

In simulations, this expression matches the Monte Carlo estimate of the moment very well, so I believe the above reasoning is correct.
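Such a simulation takes only a few lines; here is a minimal sketch of the Monte Carlo comparison (the values of $a$ and $\sigma$ are arbitrary illustrative choices):

```python
# Monte Carlo check of E[X exp(-a X)] = -a * sigma^2 * exp(a^2 * sigma^2 / 2)
# for X ~ N(0, sigma^2); a and sigma are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 0.5, 1.2
x = rng.normal(0.0, sigma, size=2_000_000)

mc = np.mean(x * np.exp(-a * x))                       # sample moment
closed_form = -a * sigma**2 * np.exp(0.5 * a**2 * sigma**2)

print(mc, closed_form)  # the two should agree to roughly 2-3 decimal places
```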

My questions:

1) Is the above reasoning actually correct?

2) How did Yang compute the expected value? Understanding the derivation would allow me to start directly from $X e^{-aX}$, instead of fitting my mixture to his setup.

Best Answer

Adapting the result from the paper to the perfectly correlated case would work, but it is not the approach I would suggest; as noted in my comment, nothing is made simpler by considering the bivariate case. That said, if you are curious how the result you are interested in is derived, the following is one way of going about it.

Let $$ \begin{bmatrix} X \\ Y \end{bmatrix} \sim \mathcal{N}\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & \rho \sigma \\ \rho \sigma & \sigma^2 \end{bmatrix} \right), $$ so that $$ Y \mid X=x \sim \mathcal{N}\left(\rho\sigma x,\ \sigma^2(1-\rho^2)\right). $$ The idea is to use the MGF together with properties of the conditional expectation. It is a little tedious, but it goes as follows: $$ \begin{align} \mathbb{E}\left[Xe^{\frac{Y}{2}}\right] &= \mathbb{E}\left[ X\,\mathbb{E}\left[ e^{\frac{Y}{2}} \; \big| \; X \right]\right] \\ &= e^{\frac{\sigma^2(1-\rho^2)}{8}}\,\mathbb{E}\left[ X e^{\frac{\rho \sigma X}{2}}\right] \\ &=\frac{2}{\rho}e^{\frac{\sigma^2(1-\rho^2)}{8}}\frac{\partial}{\partial \sigma}\,\mathbb{E}\left[e^{\frac{\rho\sigma X}{2}} \right] \\ &=\frac{2}{\rho}e^{\frac{\sigma^2(1-\rho^2)}{8}}\frac{\partial}{\partial \sigma}\, e^{\frac{\rho^2 \sigma^2}{8}} \\ &= \frac{2}{\rho}e^{\frac{\sigma^2(1-\rho^2)}{8}} \cdot \frac{\rho^2 \sigma}{4}\,e^{\frac{\rho^2 \sigma^2}{8}} \\ &=\frac{1}{2} \rho \sigma\, e^{\frac{\sigma^2}{8}} \end{align} $$
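If you want to double-check the algebra, the identity can also be verified symbolically, e.g. with sympy (this check is my addition, not part of the original derivation), by integrating $x \cdot \mathbb{E}[e^{Y/2} \mid X=x]$ against the standard normal density of $X$:

```python
# Symbolic check of E[X e^{Y/2}] = (1/2) * rho * sigma * exp(sigma^2 / 8):
# integrate x * E[e^{Y/2} | X = x] against the standard normal pdf of X.
import sympy as sp

x = sp.symbols('x', real=True)
rho, sigma = sp.symbols('rho sigma', positive=True)

pdf = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)  # pdf of X ~ N(0, 1)
# E[e^{Y/2} | X = x], using Y | X = x ~ N(rho*sigma*x, sigma^2 * (1 - rho^2))
cond_mgf = sp.exp(rho * sigma * x / 2 + sigma**2 * (1 - rho**2) / 8)

lhs = sp.integrate(x * cond_mgf * pdf, (x, -sp.oo, sp.oo))
rhs = sp.Rational(1, 2) * rho * sigma * sp.exp(sigma**2 / 8)

print(sp.simplify(lhs - rhs))  # should print 0 if the identity holds
```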
