Probability – Estimating Maximum Value of Random Variable


Suppose I have some random variable $X$ which only takes on values over some finite region of the real line, and I want to estimate the maximum value of this random variable. Obviously one crude method is to take many measurements, let's say $X_1$, $X_2$, $\ldots, X_n$ (which we'll assume are i.i.d.) and to use
$$X_{\max} = \max(X_1, \ldots, X_n)$$
as my guess, and as long as $n$ is large enough this should be good enough. However, $X_{\max}$ is always less than the actual maximum, and I'm wondering if there's any way to modify $X_{\max}$ so it gives a guess (still with some uncertainty) which is centred around the actual maximum value, rather than always a little less than it.
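For concreteness, a quick Monte Carlo sketch (assuming, for illustration only, that $X$ is uniform on $(0,M)$ with $M=1$; the values of $n$ and the number of trials are arbitrary) shows this downward bias:

```python
import random

random.seed(0)
M = 1.0          # true maximum (chosen for illustration; unknown in practice)
n = 100          # sample size
trials = 2000    # number of repeated experiments

# Average the sample maximum over many repeated experiments:
# it systematically lands below M, never above it.
avg_max = sum(max(random.uniform(0, M) for _ in range(n))
              for _ in range(trials)) / trials

print(avg_max)   # strictly below M = 1.0
```

In the uniform case the average of the sample maximum concentrates around $\frac{n}{n+1}M$, which is the bias the answer below corrects.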

Thanks

Best Answer

When the random variables are uniform on $(0,M)$ for an unknown $M\gt0$, one can check that the maximum $X_n^*$ of an i.i.d. sample of size $n$ is such that $\mathrm P(X_n^*\leqslant x)=(x/M)^n$ for every $x$ in $(0,M)$ hence $\mathrm E(X_n^*)=\frac{n}{n+1}M$. This means that an unbiased estimate of $M$ is $\widehat M_n=\frac{n+1}nX_n^*$.
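This correction is easy to check by simulation. A minimal sketch (with an arbitrary choice of $M=3$, $n=50$):

```python
import random

random.seed(1)
M = 3.0       # true endpoint (unknown in practice; fixed here to test the estimator)
n = 50
trials = 5000

def m_hat(sample):
    # Unbiased estimator for Uniform(0, M): scale the sample max by (n+1)/n.
    return (len(sample) + 1) / len(sample) * max(sample)

est = [m_hat([random.uniform(0, M) for _ in range(n)]) for _ in range(trials)]
print(sum(est) / len(est))   # averages to roughly M = 3.0
```

The raw sample maximum would average to $\frac{n}{n+1}M \approx 2.94$ here; the rescaled version removes that gap.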

For other distributions, the result is different. For example, starting from the distribution with density $ax^{a-1}/M^a$ on $(0,M)$ for some known $a\gt0$, one gets $\mathrm P(X_n^*\leqslant x)=(x/M)^{an}$ for every $x$ in $(0,M)$ hence $\mathrm E(X_n^*)=\frac{an}{an+1}M$ and an unbiased estimate of $M$ is $\widehat M_n=\frac{an+1}{an}X_n^*$.
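The same check works for this family. Since the CDF is $(x/M)^a$, inverse-CDF sampling gives $X = M\,U^{1/a}$ with $U$ uniform on $(0,1)$. A sketch (the values $M=2$, $a=3$, $n=40$ are arbitrary):

```python
import random

random.seed(2)
M, a, n = 2.0, 3.0, 40
trials = 5000

def sample(n):
    # Density a*x^(a-1)/M^a on (0, M); inverse-CDF sampling: X = M * U^(1/a)
    return [M * random.random() ** (1 / a) for _ in range(n)]

def m_hat(xs):
    an = a * len(xs)
    return (an + 1) / an * max(xs)   # unbiased for this family

est = [m_hat(sample(n)) for _ in range(trials)]
print(sum(est) / len(est))   # averages to roughly M = 2.0
```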

Likewise, if the density is $a(M-x)^{a-1}/M^a$ on $(0,M)$ for some known $a\gt0$, one gets $\mathrm E(X_n^*)=M(1-c_n)$ with $c_n=\frac{\Gamma(1/a)\,n!}{a\,\Gamma(n+1+1/a)}\sim\frac{\Gamma(1+1/a)}{n^{1/a}}$, and an unbiased estimate of $M$ is $\widehat M_n=\frac1{1-c_n}X_n^*$.
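A simulation sketch for this last case (assuming the density $a(M-x)^{a-1}/M^a$ on $(0,M)$, so that inverse-CDF sampling gives $X = M(1-U^{1/a})$, and using the exact constant $c_n=\frac{\Gamma(1/a)\,n!}{a\,\Gamma(n+1+1/a)}$; the values $M=1.5$, $a=2$, $n=30$ are arbitrary):

```python
import math
import random

random.seed(3)
M, a, n = 1.5, 2.0, 30
trials = 5000

# Exact bias constant: E(X_n^*) = M * (1 - c_n), with
# c_n = Gamma(1/a) * n! / (a * Gamma(n + 1 + 1/a)),
# which behaves like Gamma(1 + 1/a) / n^(1/a) for large n.
c_n = math.exp(math.lgamma(1 / a) + math.lgamma(n + 1)
               - math.lgamma(n + 1 + 1 / a)) / a

def sample(n):
    # Density a*(M-x)^(a-1)/M^a on (0, M); inverse CDF gives X = M*(1 - U^(1/a))
    return [M * (1 - random.random() ** (1 / a)) for _ in range(n)]

est = [max(sample(n)) / (1 - c_n) for _ in range(trials)]
print(sum(est) / len(est))   # averages to roughly M = 1.5
```

Note that the sample maximum converges to $M$ much more slowly here (at rate $n^{-1/a}$ rather than $n^{-1}$), because the density vanishes near the endpoint when $a>1$.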