[Math] A bound for the expectation of the maximum of independent random variables

expectation, probability theory, random variables

Let $X_1, \dots, X_n$ be i.i.d. centred scalar random variables with finite variance. I am aware of the following two bounds. First note that, by the layer-cake representation (valid since the maximum is nonnegative),
$$ \mathbb{E}\max(|X_1|,\dots,|X_n|) = \int_0^{\sqrt{n}} \mathbb{P}(\max(|X_1|,\dots,|X_n|) > t) \ dt + \int_{\sqrt{n}}^\infty \mathbb{P}(\max(|X_1|,\dots,|X_n|) > t) \ dt \ . $$
Now, applying Chebyshev's inequality to the square of the maximum, we have
$$ \mathbb{P}(\max (|X_1|,\dots, |X_n|) > t) \leq \frac{\mathbb{E}\max(|X_1|,\dots,|X_n|)^2}{t^2} \leq \frac{n \mathbb{E}|X_1|^2}{t^2} \ , $$
where the second inequality follows by bounding the maximum by the sum:
$$\mathbb{E}\max(|X_1|,\dots, |X_n|)^2 \leq \mathbb{E} \sum_{i=1}^n |X_i|^2 = n \mathbb{E} X_1^2 \ .$$
Now returning to the integral, we may bound the first term by $\sqrt{n}$ (since the probability is at most $1$), and for the second term we have
$$ \int_{\sqrt{n}}^\infty \mathbb{P}(\max(|X_1|,\dots,|X_n|) > t) \ dt \leq n \mathbb{E} X_1^2 \int_{\sqrt{n}}^\infty \frac{dt}{t^2} = \sqrt{n}\, \mathbb{E} X_1^2 \ . $$
So we obtain the bound
$$ \mathbb{E}\max(|X_1|,\dots,|X_n|) \leq (1 + \mathbb{E}X_1^2) \sqrt{n} \ . $$
However, in deriving this bound I did not use the independence of the random variables, so I suspect there might be a sharper bound.
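
To get a feel for how loose this $\sqrt{n}$ bound is, here is a minimal Monte Carlo sketch (my own illustration, not part of the original question), assuming standard normal $X_i$, so that $\mathbb{E}X_1^2 = 1$ and the bound reads $2\sqrt{n}$. For Gaussians the true growth is only about $\sqrt{2\log n}$:

```python
import numpy as np

# Monte Carlo sanity check (illustrative, assuming standard normal X_i):
# compare the simulated E max(|X_1|, ..., |X_n|) against the derived bound
# (1 + E X_1^2) sqrt(n) = 2 sqrt(n). For Gaussians the true value grows
# like sqrt(2 log n), so the bound is very loose.
rng = np.random.default_rng(0)
trials = 1000
for n in [10, 100, 1000, 10000]:
    maxima = np.abs(rng.standard_normal((trials, n))).max(axis=1)
    print(f"n={n:6d}  E max ~ {maxima.mean():6.3f}   bound 2*sqrt(n) = {2 * np.sqrt(n):8.1f}")
```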

Another method I have observed, which could possibly be generalized, is the following. Let $X_1$ and $X_2$ be as above. Now
$$ \max(X_1,X_2) = \frac{X_1 + X_2 + |X_2 - X_1|}{2} \ . $$
Now, using the fact that the random variables are centred, we have
$$ \mathbb{E} \max(X_1, X_2) = \frac{\mathbb{E} |X_1 - X_2|}{2} \leq \mathbb{E} |X_1| \ , $$
where the last step is the triangle inequality.
However, this approach does not accommodate the absolute values of the random variables, and even for the case $n = 3$ we already start to get cross terms which do not vanish by independence.
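
For concreteness, here is a quick numerical check of the identity and the resulting bound (my own addition, assuming standard normal $X_1, X_2$, for which $\mathbb{E}\max(X_1,X_2) = 1/\sqrt{\pi} \approx 0.564$ and $\mathbb{E}|X_1| = \sqrt{2/\pi} \approx 0.798$):

```python
import numpy as np

# Numerical check (illustrative) of the pointwise identity
#   max(X1, X2) = (X1 + X2 + |X2 - X1|) / 2
# and of the bound E max(X1, X2) <= E|X1| for centred X1, X2.
rng = np.random.default_rng(1)
x1 = rng.standard_normal(1_000_000)
x2 = rng.standard_normal(1_000_000)
assert np.allclose(np.maximum(x1, x2), (x1 + x2 + np.abs(x2 - x1)) / 2)
print(np.maximum(x1, x2).mean())  # ~ 1/sqrt(pi)  ~ 0.564
print(np.abs(x1).mean())          # ~ sqrt(2/pi)  ~ 0.798
```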

Any ideas? I am specifically looking for a sharper bound than $O(\sqrt{n})$; in fact, any bound of order $n^\alpha$ with $\alpha < \frac{1}{2}$ would be perfect.

Best Answer

The bound can be improved. In fact, let us only assume that the $X_i$'s are identically distributed (not necessarily independent) and that $ \mathbb{E}(|X_1|^p) < \infty $ for some $ p \geq 1 $. Let $ M_n = \max_{ k \leq n } | X_k| $. For any $ \epsilon > 0$, we have
\begin{align*} \mathbb{E}(M_n) & = \int_0^{\epsilon n^{1/p} } \mathbb{P}(M_n > t) \ dt + \int_{\epsilon n^{1/p} }^\infty \mathbb{P}(M_n > t) \ dt \\ & \leq \int_0^{\epsilon n^{1/p} } 1 \ dt + \int_{\epsilon n^{1/p} }^{\infty} n \mathbb{P}(|X_1| > t) \ dt \\ & = \epsilon n^{1/p} + n^{1/p} \int_{\epsilon n^{1/p} }^{\infty} n^{(p-1)/p} \ \mathbb{P}(|X_1| > t) \ dt \\ & \leq \epsilon n^{1/p} + \frac{n^{1/p}}{ \epsilon^{p-1}} \int_{\epsilon n^{1/p} }^{\infty} t^{p-1} \mathbb{P}(|X_1| > t) \ dt , \end{align*}
where the second line uses the union bound $ \mathbb{P}(M_n > t) \leq n \mathbb{P}(|X_1| > t) $, and the last line uses $ n^{(p-1)/p} \leq (t/\epsilon)^{p-1} $ on the domain of integration.

Since $ \mathbb{E}(|X_1|^p) = p \int_0^\infty t^{p-1} \mathbb{P}(|X_1| > t) \ dt < \infty $, we have that $ \int_{\epsilon n^{1/p} }^{\infty} t^{p-1} \mathbb{P}(|X_1| > t) \ dt \to 0 $ as $ n \to \infty $. Thus, dividing by $ n^{1/p} $, we have that \begin{equation*} \frac{ \mathbb{E}(M_n) }{ n^{1/p}}\leq \epsilon + \frac{1}{\epsilon^{p-1}} \int_{\epsilon n^{1/p} }^{\infty} t^{p-1} \mathbb{P}(|X_1| > t) \ dt . \end{equation*} Letting $ n \to \infty $, we have \begin{equation*} \limsup_{ n \to \infty} \frac{ \mathbb{E}(M_n) }{ n^{1/p}} \leq \epsilon . \end{equation*} Since $ \epsilon > 0 $ is arbitrary, we conclude that $ \mathbb{E}(M_n) = o (n^{1/p}) $, as opposed to the $ O (n^{1/p}) $ you obtained. In particular, for $ p = 2 $ (finite variance) this gives $ \mathbb{E}(M_n) = o(\sqrt{n}) $.
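
As a sanity check of the $ o(\sqrt{n}) $ conclusion for $ p = 2 $, here is a small simulation (my own sketch, not part of the answer) using classical Pareto variables with tail index $2.5$, so $ \mathbb{E}X_1^2 < \infty $ but the tails are heavy; note the answer's bound does not require centring. The ratio $ \mathbb{E}(M_n)/\sqrt{n} $ should decay roughly like $ n^{1/2.5 - 1/2} = n^{-0.1} $:

```python
import numpy as np

# Simulation (illustrative) of E(M_n)/sqrt(n) -> 0 for p = 2: classical
# Pareto variables with tail index 2.5 (finite variance, heavy tails).
# numpy's pareto(a) samples the Lomax law; adding 1 gives the classical
# Pareto with P(X > t) = t^{-2.5} for t >= 1.
rng = np.random.default_rng(2)
trials = 200
for n in [10**2, 10**3, 10**4, 10**5]:
    x = rng.pareto(2.5, size=(trials, n)) + 1.0
    ratio = x.max(axis=1).mean() / np.sqrt(n)
    print(f"n={n:7d}  E(M_n)/sqrt(n) ~ {ratio:.4f}")
```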
