Solved – Deriving the conditional expectation of a truncated RV, Gumbel distribution (logistic difference)


I have two random variables which are independent and identically distributed, i.e. $\epsilon_{1}, \epsilon_{0} \overset{\text{iid}}{\sim} \text{Gumbel}(\mu,\beta)$:

$$F(\epsilon) = \exp(-\exp(-\frac{\epsilon-\mu}{\beta})),$$

$$f(\epsilon) = \frac{1}{\beta}\exp\left(-\frac{\epsilon-\mu}{\beta}-\exp\left(-\frac{\epsilon-\mu}{\beta}\right)\right).$$
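(As a quick sanity check of these two formulas, one can verify numerically that the density is the derivative of the CDF; a minimal sketch in Python, with arbitrary values of $\mu$ and $\beta$:)

```python
import math

mu, beta = 1.0, 2.0  # arbitrary location and scale parameters

# Gumbel CDF and density as written above
F = lambda e: math.exp(-math.exp(-(e - mu) / beta))
f = lambda e: (1 / beta) * math.exp(-((e - mu) / beta
                                      + math.exp(-(e - mu) / beta)))

# central finite difference of F should match f
e, h = 0.7, 1e-6
central_diff = (F(e + h) - F(e - h)) / (2 * h)
print(abs(central_diff - f(e)))  # difference ~1e-9 or smaller
```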

I am trying to calculate two quantities:

  1. $$\mathbb{E}_{\epsilon_{1}}\mathbb{E}_{\epsilon_{0}|\epsilon_{1}}\left[c+\epsilon_{1}|c+\epsilon_{1}>\epsilon_{0}\right]$$
  2. $$\mathbb{E}_{\epsilon_{1}}\mathbb{E}_{\epsilon_{0}|\epsilon_{1}}\left[\epsilon_{0}|c+\epsilon_{1}<\epsilon_{0}\right]$$

I get to a point where I need to integrate something of the form $e^{e^{x}}$, which does not seem to have a closed-form antiderivative. Can anyone help me out? Maybe I have made a mistake somewhere.

I feel there should definitely be a closed-form solution. (EDIT: even if there is no closed form, software that can quickly evaluate the integral [such as Ei(x)] would be fine.)
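(Aside on the Ei(x) mentioned in the edit: the exponential integral is available in standard numerical libraries; a minimal sketch assuming SciPy is installed:)

```python
from scipy.special import expi  # exponential integral Ei(x)

# Ei(1) = 1.8951178163559368...
print(expi(1.0))
```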


EDIT:

I think a change of variables helps. Let

$$y =\exp\left(-\frac{\epsilon_{1}-\mu}{\beta}\right),$$

so that

$$\epsilon_{1}=\mu-\beta\ln y.$$

As $\epsilon_{1}$ ranges over $\mathbb{R}$, $y$ ranges over $(0,\;\infty)$, and the condition $c+\epsilon_{1}>\epsilon_{0}$ corresponds to $y\in\left(0,\;\exp(-\frac{\epsilon_{0}-c-\mu}{\beta})\right)$.

The Jacobian is $|J|=\left|\dfrac{d\epsilon_{1}}{dy}\right|=\frac{\beta}{y}$. Under this change of variables, I have boiled (1) down to…

$$\int_{0}^{\infty}\dfrac{1}{1-e^{-x}}\left(\int_{\mu-\beta\ln x-c}^{\infty}\left[c+\mu-\beta\ln y\right]e^{-y}dy\right)e^{-x}dx$$

There might be an algebra mistake, but even so I cannot solve this integral…


RELATED QUESTION: Expectation of the Maximum of iid Gumbel Variables

Best Answer

Since the parameters $(\mu,\beta)$ of the Gumbel distribution are location and scale, respectively, and since $\mathbb{E}[c+\epsilon_1\mid\cdot]=c+\mathbb{E}[\epsilon_1\mid\cdot]$, the problem reduces to computing
$$\mathbb{E}[\epsilon_1|\epsilon_1+c>\epsilon_0]= \frac{\int_{-\infty}^{+\infty} x F(x+c) f(x) \,\text{d}x}{\int_{-\infty}^{+\infty} F(x+c) f(x) \,\text{d}x}$$
where $f$ and $F$ are the density and CDF with $\mu=0$, $\beta=1$. The denominator is available in closed form:
\begin{align*}
\int_{-\infty}^{+\infty} F(x+c) f(x) \,\text{d}x &= \int_{-\infty}^{+\infty} \exp\{-\exp[-x-c]\}\exp\{-x\}\exp\{-\exp[-x]\}\,\text{d}x\\
&\stackrel{a=e^{-c}}{=}\int_{-\infty}^{+\infty} \exp\{-(1+a)\exp[-x]\}\exp\{-x\}\,\text{d}x\\
&=\frac{1}{1+a}\left[ \exp\{-(1+a)e^{-x}\}\right]_{-\infty}^{+\infty}\\
&=\frac{1}{1+a}
\end{align*}
The numerator involves an exponential integral, since (according to the WolframAlpha integrator)
\begin{align*}
\int_{-\infty}^{+\infty} x F(x+c) f(x) \,\text{d}x &= \int_{-\infty}^{+\infty} x \exp\{-(1+a)\exp[-x]\}\exp\{-x\}\,\text{d}x\\
&\stackrel{z=e^{-x}}{=} -\int_{0}^{+\infty} \log(z) \exp\{-(1+a)z\}\,\text{d}z\\
&= \frac{-1}{1+a}\left[\text{Ei}(-(1+a) z) -\log(z) e^{-(1+a) z}\right]_{0}^{\infty}\\
&= \frac{\gamma+\log(1+a)}{1+a}
\end{align*}
(the minus sign on the second line appears because $x=-\log z$ and $e^{-x}\,\text{d}x=-\text{d}z$, so after the substitution the limits run from $\infty$ to $0$). Hence
$$\mathbb{E}[\epsilon_1|\epsilon_1+c>\epsilon_0]=\gamma+\log(1+e^{-c})$$
where $\gamma$ is the Euler–Mascheroni constant. This result can easily be checked by simulation, since producing a Gumbel variate amounts to transforming a Uniform$(0,1)$ variate, $U$, as $X=-\log\{-\log(U)\}$. Monte Carlo and theoretical means do agree:
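As a further sanity check, both integrals above can be evaluated by numerical quadrature (a sketch, assuming SciPy is available; the value $c=0.5$ is an arbitrary choice):

```python
from math import exp, log
from scipy.integrate import quad

gamma = 0.57721566490153286  # Euler-Mascheroni constant
c = 0.5
a = exp(-c)

# denominator integrand: F(x + c) f(x) with mu = 0, beta = 1;
# the integrand is negligible outside [-20, 30]
den_f = lambda x: exp(-(1 + a) * exp(-x)) * exp(-x)
den, _ = quad(den_f, -20, 30)
print(den, 1 / (1 + a))                     # both ~ 0.6225

# numerator integrand: x F(x + c) f(x)
num_f = lambda x: x * exp(-(1 + a) * exp(-x)) * exp(-x)
num, _ = quad(num_f, -20, 30)
print(num, (gamma + log(1 + a)) / (1 + a))  # both ~ 0.6544

print(num / den)  # ~ gamma + log(1 + e^{-c}) ~ 1.0513
```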

This figure demonstrates the agreement between Monte Carlo and theoretical means as $c$ varies from $-2$ to $2$, with logarithmic axes, based on $10^{5}$ simulations.
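The simulation check described in the answer can be sketched in pure Python (the values of $c$ and the sample size are arbitrary choices):

```python
import math
import random

random.seed(42)
c, n = 0.5, 200_000

kept = []
for _ in range(n):
    # Gumbel(0, 1) variates via the inverse CDF: X = -log(-log(U))
    e1 = -math.log(-math.log(random.random()))
    e0 = -math.log(-math.log(random.random()))
    if c + e1 > e0:          # keep epsilon_1 when the condition holds
        kept.append(e1)

mc_mean = sum(kept) / len(kept)
theory = 0.57721566490153286 + math.log1p(math.exp(-c))  # gamma + log(1+e^{-c})
print(mc_mean, theory)       # agree to about two decimal places
```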