Probability Theory – Convergence of Sequence Involving Maximum of i.i.d. Gaussian Variables

Tags: limits, normal-distribution, probability-theory, sequences-and-series

It is well known that for i.i.d. standard Gaussian random variables $X_1,\ldots,X_n$, with $X_\max=\max(X_1,\ldots,X_n)$, the following convergence result holds:

$$P\left(\lim_{n\rightarrow\infty}\frac{X_\max}{\sqrt{2\log n}}=1\right)=1$$

that is, $\frac{X_\max}{\sqrt{2\log n}}\rightarrow1$ almost surely (for a proof of this convergence, see Example 4.4.1 in Galambos, "Asymptotic Theory of Extreme Order Statistics").
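
As a quick numerical illustration (a minimal Python sketch, not a proof), one can sample the maximum directly and watch the ratio creep toward $1$; the convergence is notoriously slow:

```python
# Monte Carlo illustration of X_max / sqrt(2 log n) -> 1 (one sample path per n).
import numpy as np

rng = np.random.default_rng(0)
for n in [10**3, 10**5, 10**7]:
    x_max = rng.standard_normal(n).max()       # X_max for one sample of size n
    print(n, x_max / np.sqrt(2 * np.log(n)))   # ratio should be close to 1
# Typical output: ratios around 0.9-1.0, approaching 1 only very slowly
# (the correction term is of order log log n / log n, cf. the answer below).
```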

I am wondering what happens to the following limit:

$$L=\lim_{n\rightarrow\infty}\left[\left(\frac{X_\max}{\sqrt{2\log n}}-1\right)f(n)\log(n)\right]$$
where $f(n)=o(1)$.

Is $L=0$ or infinite? Does it depend on $f(n)$? I am not sure how to deal with the indeterminate form here…
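
For what it is worth, here is a quick numerical experiment (Python, illustrative only) with two hypothetical choices of $f(n)$; the two columns seem to behave differently, which suggests the answer depends on $f$:

```python
# Exploratory sketch: (X_max / sqrt(2 log n) - 1) * f(n) * log n for two
# hypothetical choices of f, f(n) = 1/log log n and f(n) = 1/sqrt(log n).
import numpy as np

rng = np.random.default_rng(0)
for n in [10**4, 10**5, 10**6, 10**7]:
    x_max = rng.standard_normal(n).max()
    core = (x_max / np.sqrt(2 * np.log(n)) - 1) * np.log(n)
    print(n,
          core / np.log(np.log(n)),    # f(n) = 1/log log n
          core / np.sqrt(np.log(n)))   # f(n) = 1/sqrt(log n)
# Both columns move very slowly with n; the answer below identifies the
# exact scaling of the centered maximum.
```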

Best Answer

Let $M_n=\max\{X_k;1\leqslant k\leqslant n\}$ and let us first recall how the first-order asymptotics of $M_n$ is obtained. For every $x$,
$$ P[M_n\leqslant x]=P[X_1\leqslant x]^n, $$
and standard estimates of the Gaussian tail show that, when $x\to\infty$,
$$ P[X_1\gt x]=1/\theta(x),\qquad \theta(x)\sim x\sqrt{2\pi}\,\mathrm e^{x^2/2}. $$
Since $P[M_n\leqslant x_n]=(1-1/\theta(x_n))^n\approx\exp(-n/\theta(x_n))$, if $\theta(u_n)\ll n$, then $P[M_n\leqslant u_n]\to0$, while if $\theta(v_n)\gg n$, then $P[M_n\leqslant v_n]\to1$. This holds with $u_n=(1-\varepsilon)\sqrt{2\log n}$ and $v_n=(1+\varepsilon)\sqrt{2\log n}$, for every positive $\varepsilon$, hence $M_n/\sqrt{2\log n}$ converges in probability to $1$.
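
The Gaussian tail equivalence above can be sanity-checked numerically; here is a small sketch (it assumes SciPy for the exact tail):

```python
# Compare the exact Gaussian tail P[X_1 > x] with the asymptotic equivalent
# 1/theta(x) = exp(-x^2/2) / (x * sqrt(2 pi)) used above.
import numpy as np
from scipy.stats import norm

for x in [2.0, 4.0, 6.0, 8.0]:
    exact = norm.sf(x)                                      # P[X_1 > x]
    approx = np.exp(-x**2 / 2) / (x * np.sqrt(2 * np.pi))   # 1/theta(x)
    print(f"x={x}: exact={exact:.3e}, approx={approx:.3e}, ratio={exact/approx:.4f}")
# The ratio tends to 1 as x grows, i.e. theta(x) ~ x sqrt(2 pi) e^{x^2/2}.
```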

To go further, assume that $x_n=(1+z_n)\sqrt{2\log n}$, with $z_n\to0$. Then,
$$ n^{-1}\theta(x_n)\sim2\sqrt\pi\exp\left( (2z_n+z_n^2)\log n+\tfrac12\log\log n\right). $$
In particular, if $2z_n\log n=t-\tfrac12\log\log n$ for some fixed $t$ (then $z_n^2\log n\to0$, so this term is negligible), then $n^{-1}\theta(x_n)\sim\sqrt{4\pi}\,\mathrm e^{t}$, hence $P[M_n\leqslant x_n]\to\exp(-\mathrm e^{-t}/\sqrt{4\pi})$. This means that
$$ T_n=2\log n\left(\frac{M_n}{\sqrt{2\log n}}-1\right)+\frac12\log\log n+\frac12\log(4\pi) $$
converges in distribution to a random variable $T$ such that, for every $t$,
$$ P[T\leqslant t]=\exp(-\mathrm e^{-t}), $$
that is, to a standard Gumbel random variable. In particular, since $T_n$ is bounded in probability,
$$ U_n=\frac{\log n}{\log\log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)=\frac{T_n-\frac12\log\log n-\frac12\log(4\pi)}{2\log\log n}\to-\frac14\ \text{in probability.} $$
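
The distributional limit just derived can be illustrated by simulation (a sketch only; the convergence of Gaussian maxima to the Gumbel law is known to be slow, so only rough agreement should be expected at moderate $n$):

```python
# Empirical CDF of T_n versus the limiting Gumbel CDF exp(-e^{-t}).
import numpy as np

rng = np.random.default_rng(1)
n, reps = 10**5, 2000
M = np.array([rng.standard_normal(n).max() for _ in range(reps)])   # copies of M_n
T = (2 * np.log(n) * (M / np.sqrt(2 * np.log(n)) - 1)
     + 0.5 * np.log(np.log(n)) + 0.5 * np.log(4 * np.pi))
for t in [-1.0, 0.0, 1.0, 2.0]:
    print(t, (T <= t).mean(), np.exp(-np.exp(-t)))   # empirical vs limiting CDF
```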

Edit: For every $n\geqslant2$, consider the random variable
$$ V_n=\frac{\log n}{\log\log n}\left(\frac{X_n}{\sqrt{2\log n}}-1\right). $$
The asymptotics of the Gaussian tail used above show that, for every fixed $t$,
$$ P[V_n\geqslant t]\sim\frac1{2\sqrt\pi\cdot n\cdot(\log n)^{1/2+2t}}. $$
If $t\lt1/4$, the series $\sum\limits_nP[V_n\geqslant t]$ diverges, hence the Borel–Cantelli lemma (the difficult part, which applies because the $X_n$ are independent) shows that, almost surely, $V_n\geqslant t$ for infinitely many $n$. Since $U_n\geqslant V_n$, almost surely $U_n\geqslant t$ for infinitely many $n$, hence $\limsup U_n\geqslant\frac14$ almost surely.

If $t\gt1/4$, the series $\sum\limits_nP[V_n\geqslant t]$ converges, hence the Borel–Cantelli lemma (the easy part) shows that, almost surely, $V_n\leqslant t$ for every $n$ large enough. Thus, by independence, $V_n\leqslant t$ for every $n$ with positive probability; since the threshold $\sqrt{2\log n}\left(1+t\,\frac{\log\log n}{\log n}\right)$ is eventually nondecreasing in $n$, this implies that, with positive probability, $U_n\leqslant t$ for every $n$ large enough. Since $M_n\to\infty$ almost surely, asymptotically $U_n$ does not depend on $(X_i)_{i\leqslant k}$, for any $k$; thus $\limsup U_n$ is measurable with respect to the tail $\sigma$-algebra of the sequence $(X_n)$, and by Kolmogorov's zero-one law the event $[\limsup U_n\leqslant t]$ has probability $0$ or $1$. Having positive probability, it has probability $1$, so $\limsup U_n\leqslant t$ almost surely, for every $t\gt1/4$.
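
The tail estimate for $V_n$, which drives both halves of the Borel–Cantelli argument, can also be checked numerically (a sketch assuming SciPy; the equivalence sets in slowly):

```python
# Check P[V_n >= t] ~ 1 / (2 sqrt(pi) n (log n)^(1/2 + 2t)) for a few t and n.
import numpy as np
from scipy.stats import norm

for t in [0.0, 0.25, 0.5]:
    for n in [10**2, 10**4, 10**6]:
        ln = np.log(n)
        x = np.sqrt(2 * ln) * (1 + t * np.log(ln) / ln)   # V_n >= t  <=>  X_n >= x
        exact = norm.sf(x)
        approx = 1 / (2 * np.sqrt(np.pi) * n * ln ** (0.5 + 2 * t))
        print(f"t={t}, n={n}: ratio exact/approx = {exact/approx:.3f}")
# The exponent 1/2 + 2t of log n is what makes sum_n P[V_n >= t] diverge for
# t < 1/4 and converge for t > 1/4.
```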

Finally, $$ \limsup\limits_{n\to\infty}U_n=+\frac14\ \text{almost surely.} $$
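
Finally, a trajectory-level sketch (illustrative only): along one sample path, $U_n$ stays near the in-probability limit $-\frac14$, while the excursions realizing the almost-sure $\limsup=+\frac14$ occur at scales far beyond any simulation:

```python
# One trajectory of U_n = (log n / log log n)(M_n / sqrt(2 log n) - 1),
# via running maxima, sampled at logarithmically spaced checkpoints.
import numpy as np

rng = np.random.default_rng(2)
N = 10**7
M = np.maximum.accumulate(rng.standard_normal(N))    # M_1, ..., M_N
for n in np.unique(np.logspace(1, 7, 13).astype(int)):
    u = (np.log(n) / np.log(np.log(n))) * (M[n - 1] / np.sqrt(2 * np.log(n)) - 1)
    print(n, round(u, 3))
# Typical values are negative, drifting (very slowly) toward -1/4; the rare
# excursions up to +1/4 that give limsup U_n = 1/4 a.s. are invisible here.
```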