Solved – Conditional Expected Value of Product of Normal and Log-Normal Distribution

Tags: conditional-probability, distributions, probability, self-study

Could someone please provide the answer and steps to solve this expression?

\begin{eqnarray*}
E\left[\left(e^{X}Y+k\right)\,\middle|\,\left(e^{X}Y+k\right)>0\right]
\end{eqnarray*}
$E$ is the expectation operator.
\begin{eqnarray*}
X\sim N\left(\mu_{X},\sigma_{X}^{2}\right);\quad Y\sim N\left(\mu_{Y},\sigma_{Y}^{2}\right);\quad X\text{ and }Y\text{ are independent. Also, }k<0.
\end{eqnarray*}

KEY MISSING LINK

The above expression depends on proving the general identity below, or at least on showing that it holds for the special case of the normal and log-normal distributions.

$E[UV\mid UV>c] = E[UE[V\mid UV>c]]$

Here, $U,V$ are independent random variables that could be discrete or continuous and follow any probability distribution. $E$ is the expectation operator and $c$ is a constant.

Of course, for our main question we require only the case where one of them is normally distributed and the other is log-normal. Say $U$ is log-normal and $V$ is normal. Is the above identity true for this special case?
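As a quick numerical sanity check (not a proof), the Python sketch below estimates both sides of the proposed identity by Monte Carlo for a log-normal $U$ and a normal $V$. All distribution parameters and the threshold $c$ are made-up illustrative values; note that since $E[V\mid UV>c]$ is a constant, the right-hand side reduces to $E[U]\,E[V\mid UV>c]$.

```python
import math
import random

random.seed(0)

n = 200_000
c = 0.5  # made-up threshold

# U log-normal: U = exp(N(0, 0.5^2)); V ~ N(1, 1) (illustrative choices)
u = [math.exp(random.gauss(0.0, 0.5)) for _ in range(n)]
v = [random.gauss(1.0, 1.0) for _ in range(n)]

# Keep only replications where the conditioning event UV > c holds
sel = [(ui, vi) for ui, vi in zip(u, v) if ui * vi > c]

lhs = sum(ui * vi for ui, vi in sel) / len(sel)  # E[UV | UV > c]
e_v_given = sum(vi for _, vi in sel) / len(sel)  # E[V | UV > c]
rhs = (sum(u) / n) * e_v_given                   # E[U] * E[V | UV > c]

print(lhs, rhs)
```

Comparing the two printed values for several parameter choices gives a quick feel for whether the identity can hold in this special case.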

This is posed as a separate question due to its importance: Proof of Simplification of Conditional Expectation of Product of Random Variables

STEPS TRIED

0) JOINT CONDITIONAL DENSITY

I am having difficulty coming up with the conditional joint density to use in the above expectation. The joint density function of just $X$ and $Y$ is straightforward and follows from the standard density function for the bivariate normal case. How would we incorporate the conditional aspect into the joint density function?
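For reference, the general form of such a conditional density is the unconditional product density restricted to the conditioning event and renormalized by its probability. With $X$ and $Y$ independent,
\begin{eqnarray*}
f_{X,Y\mid e^{X}Y+k>0}\left(x,y\right)=\frac{f_{X}\left(x\right)f_{Y}\left(y\right)\mathbf{1}\left\{ e^{x}y+k>0\right\} }{\Pr\left[e^{X}Y+k>0\right]}
\end{eqnarray*}
where $\mathbf{1}\left\{ \cdot\right\}$ is the indicator function. The difficulty is that neither the normalizing probability nor the resulting integral appears to have an obvious closed form.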

1) NORMAL LOG-NORMAL MIXTURE PAPER BY YANG

(Link: http://repec.org/esAUSM04/up.21034.1077779387.pdf)

This paper has the first four central moments without proof (Equation 5 in above paper). If someone could provide these proofs, that might shed more light on the problem above.

The variables in the Yang paper are correlated, which is easy to specialize to the independent case above; but they also have zero means in the paper, which does not apply directly to our case, since we have non-zero means.

2) OTHER RELATED LINKS

a) An interesting question about an expectation involving a slightly modified form of the normal log-normal mixture. This lacks the conditional aspect, though, and hence needs some modification before it can be used for the problem above.

https://math.stackexchange.com/questions/1142841/covariance-in-normal-lognormal-nln-mixture

b) Another question on the normal log-normal mixture though this lacks a deeper discussion.

https://math.stackexchange.com/questions/159818/combination-of-a-normal-r-v-with-a-log-normal-one

c) Question on conditional expectation of product of independent random variables. It would be good to know which aspects from this are applicable in our case.

https://math.stackexchange.com/questions/544410/result-and-proof-on-the-conditional-expectation-of-the-product-of-two-random-var

d) Other interesting questions on conditional expectation of independent random variables.

https://math.stackexchange.com/questions/380866/conditional-expectations-for-independent-random-variables?rq=1

https://math.stackexchange.com/questions/55524/rule-with-independent-random-variables-and-conditional-expectations?rq=1

3) TAYLOR SERIES APPROXIMATIONS

Would it be possible to use Taylor series approximations here? I am a little confused by the conditional expectation and the normal log-normal mixture. Any pointers on whether this approach is feasible, and if so how to proceed, would be great.

4) USING STANDARD NORMAL (SEEMS LIKE A DEAD END)

I know that if we could express this sum using the standard normal as below, there would be a solution. Advice on how to do this, or other alternatives to solve the above, would be helpful as well. This seems to be a dead end, as confirmed by experts on this forum, but I am keeping it here in case someone discovers a way to continue with this approach.

\begin{eqnarray*}
W=\left(e^{X}Y+k\right)\overset{?}{=}\mu+\sigma Z\text{ where }Z\sim N\left(0,1\right)
\end{eqnarray*}

\begin{eqnarray*}
\left[W\sim N\left(\mu,\sigma^{2}\right)\Rightarrow W=\mu+\sigma Z\;;\; W>0\Rightarrow Z>-\mu/\sigma\right]
\end{eqnarray*}
We then need to determine, $\mu\text{ and }\sigma$.

We have, for every standard normal variable $Z$ and for every $u$, $\Pr\left[Z>-u\right]=\Pr\left[Z<u\right]=\mathbf{\Phi}\left(u\right)$. Here, $\phi$ and $\mathbf{\Phi}$ are the standard normal PDF and CDF, respectively.
\begin{eqnarray*}
E\left[\left.Z\right|Z>-u\right] & = & \frac{1}{\mathbf{\Phi}\left(u\right)}\left[\int_{-u}^{\infty}t\phi\left(t\right)dt\right]\\
& = & \frac{1}{\mathbf{\Phi}\left(u\right)}\left[\left.-\phi\left(t\right)\right|_{-u}^{\infty}\right]=\frac{\phi\left(u\right)}{\mathbf{\Phi}\left(u\right)}
\end{eqnarray*}
Hence we have,
\begin{eqnarray*}
E\left[\left.W\right|W>0\right] & = & \mu+\sigma E\left[\left.Z\right|Z>\left(-\frac{\mu}{\sigma}\right)\right]\\
& = & \mu+\frac{\sigma\phi\left(\mu/\sigma\right)}{\mathbf{\Phi}\left(\mu/\sigma\right)}
\end{eqnarray*}
Setting $\psi\left(u\right)=u+\phi\left(u\right)/\Phi\left(u\right)$,
\begin{eqnarray*}
E\left[\left.W\right|W>0\right]=\sigma\psi\left(\mu/\sigma\right)
\end{eqnarray*}
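The truncated-mean formula above is easy to verify numerically. The following Python sketch (standard library only; $\mu$ and $\sigma$ are made-up illustrative values) compares the closed form $\sigma\psi(\mu/\sigma)$ against a Monte Carlo estimate:

```python
import math
import random
from statistics import NormalDist

random.seed(1)

mu, sigma = 0.8, 1.5  # made-up values for illustration
std = NormalDist()
phi, Phi = std.pdf, std.cdf  # standard normal PDF and CDF

# Closed form: E[W | W > 0] = sigma * psi(mu/sigma), psi(u) = u + phi(u)/Phi(u)
u = mu / sigma
closed_form = sigma * (u + phi(u) / Phi(u))

# Monte Carlo estimate of E[W | W > 0] for W ~ N(mu, sigma^2)
draws = [random.gauss(mu, sigma) for _ in range(500_000)]
kept = [w for w in draws if w > 0]
estimate = sum(kept) / len(kept)

print(closed_form, estimate)
```

The two printed values should agree to within Monte Carlo error (a few parts in a thousand at this sample size).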

Best Answer

What is the intended use of the result? That bears on what form of answer is needed, including whether a stochastic (Monte Carlo) simulation approach might be adequate. There is also the bigger-picture question of whether this problem needs to be solved at all: if someone came up with it as a way of solving a higher-level problem, there may be a better approach to that higher-level problem which does not require this.

Here is a stochastic (Monte Carlo) simulation solution in MATLAB.

a = 1; b = 2; c = 3; d = 4; k = -1; % made-up values for illustrative purposes
n = 1e8; % number of replications
mux = 10; sigmax = 4; sigmay = 7; % made-up values for illustrative purposes
X = mux + sigmax * randn(n,1); % X ~ N(mux, sigmax^2)
Y = sigmay * randn(n,1); Y1 = a + b + c + d * Y; % Y1 ~ N(a+b+c, (d*sigmay)^2)
success_index = exp(X).*Y1 + k > 0; % replications in which (e^X*Y1 + k) > 0, matching the stated condition
num_success = sum(success_index);
Cond_Sample = exp(X(success_index)) .* Y1(success_index) + k; % realized values of e^X*Y1 + k given the event
disp([num_success mean(Cond_Sample) std(Cond_Sample)/sqrt(num_success)]) % count, conditional mean, standard error
Sample output:

1.0e+09 *
0.058475265000000   1.502775087443930   0.057342191058931
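For readers without MATLAB, here is a hedged Python translation of the same simulation (standard library only, and a smaller sample size; all parameter values are the made-up ones from the MATLAB snippet):

```python
import math
import random

random.seed(42)

a, b, c, d, k = 1, 2, 3, 4, -1  # made-up values from the MATLAB snippet
mux, sigmax, sigmay = 10, 4, 7
n = 200_000                     # fewer replications than the MATLAB run

cond_sample = []
for _ in range(n):
    x = random.gauss(mux, sigmax)            # X ~ N(mux, sigmax^2)
    y1 = a + b + c + d * random.gauss(0, sigmay)
    w = math.exp(x) * y1 + k
    if w > 0:                                # conditioning event (e^X*Y1 + k) > 0
        cond_sample.append(w)

num_success = len(cond_sample)
mean = sum(cond_sample) / num_success
var = sum((w - mean) ** 2 for w in cond_sample) / (num_success - 1)
stderr = math.sqrt(var / num_success)
print(num_success, mean, stderr)
```

Because of the much smaller sample size, the standard error is correspondingly larger than in the MATLAB run.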