Solved – Mean and variance of the maximum of a random number of Uniform variables

expected-value, order-statistics, random-variable, self-study, variance

Let $U \sim \text{Uniform}(0,1)$, and let $(U_1, U_2, \dots, U_Y)$ denote an iid sample of size $Y$, where the number of drawings $Y$ is itself a random variable with pmf:

$$P(Y=y)=\frac{1}{(e-1)y!} \quad \text{ for } y = 1, 2 \dots$$

Here, $Y$ is independent of the $U_i$'s.

My interest is to find the expected value and variance of $M$, where $M = \max(U_1, \dots, U_Y)$.


Here is what I have so far:

I started by defining the CDF of $M$ as follows.

$\Pr(U_1 \le M) = \int_0^M u_1 \,\mathrm{d}u_1 = \frac{M^2}{2}$

Since the CDF is the same for all of the iid $U_i$'s,

$\Pr(M \le m) = \left(\frac{m^2}{2}\right)^y$

Is this correct? If so, how do I proceed from here? Can I treat $y$ as a constant and find the pdf of $M$ by differentiating the CDF?

Also, I computed $E[Y]=\sum_{y=1}^\infty \frac{y}{(e-1)\,y!} = 1 + \frac{1}{e-1} = \frac{e}{e-1}$, but I'm not yet sure how, or if, this can help.
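As a sanity check (not part of the derivation itself), the series can be summed numerically in Python; the factorial terms vanish quickly, so a short truncation suffices:

```python
import math

e = math.e

# E[Y] = sum_{y>=1} y / ((e-1) * y!); truncating at y = 50 is more than enough.
expected_Y = sum(y / ((e - 1) * math.factorial(y)) for y in range(1, 51))

print(expected_Y)   # ~1.58198
print(e / (e - 1))  # closed form 1 + 1/(e-1) = e/(e-1), same value
```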

Best Answer

I had never seen this exercise before, but it is interesting and leads to a fun calculation.


First, let me say that your computations are not entirely correct. What you have computed concerns only the conditional law of $M \mid Y=y$, and even that contains an error: for $U_1 \sim \text{Uniform}(0,1)$, $\Pr(U_1 \le m) = m$, not $\frac{m^2}{2}$ (you integrated the variable itself rather than its density, which is $1$ on $(0,1)$). Here is a complete answer.

Let us determine the distribution of the random variable $M \mid Y=y$. I emphasise that here, $Y$ is fixed. It is easy to see that \begin{align} \Pr(M \leq x \mid Y=y) &= \Pr(U_1 \le x, \ldots, U_y \le x \mid Y=y) \\ &= \prod_{i=1}^{y} \Pr(U_i \leq x) \\ &= \prod_{i=1}^{y} \int_0^x 1 \,\mathrm{d}t \\ &= x^y \\ &= \int_0^x y t^{y-1} \,\mathrm{d}t \end{align} for $0 \le x \le 1$. So the pdf of $M \mid Y=y$ is the function $t \mapsto yt^{y-1}$ on $(0,1)$.
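A quick simulation sketch of this conditional law: the pdf $yt^{y-1}$ implies $\mathbb{E}[M \mid Y=y] = \int_0^1 t \cdot y t^{y-1}\,\mathrm{d}t = \frac{y}{y+1}$, which we can check by drawing the max of $y$ uniforms directly (the choice $y=5$ is arbitrary, just for the check):

```python
import random

random.seed(0)

y = 5          # condition on Y = y (arbitrary value for the check)
n = 200_000
# Given Y = y, M is the max of y iid Uniform(0,1) draws
ms = [max(random.random() for _ in range(y)) for _ in range(n)]

emp_mean = sum(ms) / n
print(emp_mean)  # should be close to y/(y+1) = 5/6 ~ 0.8333
```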

To find the CDF of $M$, the law of total probability is enough. Hence \begin{align} \Pr(M \leq x) &= \sum_{y=1}^{\infty} \Pr(M\leq x, Y=y) \\ &= \sum_{y=1}^{\infty} \Pr(M\leq x \mid Y=y)\, \Pr(Y=y) \\ &= \int_0^x \sum_{y=1}^{\infty} \frac{1}{(e-1)\,y!}\, y t^{y-1} \,\mathrm{d}t \\ &= \int_0^x \sum_{y=1}^{\infty} \frac{t^{y-1}}{(e-1)\,(y-1)!} \,\mathrm{d}t \\ &= \int_0^x \frac{e^t}{e-1} \,\mathrm{d}t = \frac{e^x-1}{e-1}. \end{align} (Interchanging the sum and the integral is justified since all terms are nonnegative.)
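We can double-check the mixture CDF $\Pr(M \le x) = \int_0^x \frac{e^t}{e-1}\,\mathrm{d}t = \frac{e^x-1}{e-1}$ by simulation. The inverse-CDF sampler for $Y$ below is my own construction for the check, not part of the derivation (note in passing that this pmf is the zero-truncated Poisson(1) distribution):

```python
import math
import random

random.seed(0)
e = math.e

def sample_Y():
    # Inverse-CDF sampling from P(Y=y) = 1 / ((e-1) * y!), y = 1, 2, ...
    u, cum, y = random.random(), 0.0, 0
    while cum < u:
        y += 1
        cum += 1.0 / ((e - 1) * math.factorial(y))
    return y

n = 200_000
# Draw Y, then M = max of Y iid uniforms
samples = [max(random.random() for _ in range(sample_Y())) for _ in range(n)]

for x in (0.25, 0.5, 0.75):
    emp = sum(m <= x for m in samples) / n
    theo = (math.exp(x) - 1) / (e - 1)
    print(f"x={x}: empirical {emp:.4f}, theoretical {theo:.4f}")
```

The empirical and theoretical CDF values should agree to about two decimal places at this sample size.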

As a consequence, the pdf of $M$ is the function $t \mapsto e^t / (e-1)$ on $(0,1)$. From this pdf it is easy to derive the first two moments of the distribution. Two integrations by parts lead to \begin{align} \mathbb{E}[M] &= \frac{1}{e-1}, \\ \mathrm{Var}(M) &= \frac{e^2-3e+1}{(e-1)^2}. \end{align}
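Finally, a Monte Carlo sketch confirming the two moments $\mathbb{E}[M] = \frac{1}{e-1} \approx 0.582$ and $\mathrm{Var}(M) = \frac{e^2-3e+1}{(e-1)^2} \approx 0.0794$ (again using a hand-rolled inverse-CDF sampler for $Y$, my own construction for the check):

```python
import math
import random

random.seed(1)
e = math.e

def sample_Y():
    # Inverse-CDF sampling from P(Y=y) = 1 / ((e-1) * y!), y = 1, 2, ...
    u, cum, y = random.random(), 0.0, 0
    while cum < u:
        y += 1
        cum += 1.0 / ((e - 1) * math.factorial(y))
    return y

n = 200_000
ms = [max(random.random() for _ in range(sample_Y())) for _ in range(n)]

mean = sum(ms) / n
var = sum((m - mean) ** 2 for m in ms) / n

print(mean, 1 / (e - 1))                      # both ~0.582
print(var, (e**2 - 3 * e + 1) / (e - 1) ** 2)  # both ~0.0794
```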

And there it is. I hope this answers your question.