# Solved – Does the normal distribution converge to a uniform distribution when the standard deviation grows to infinity?

convergence, normal distribution

Does the normal distribution converge to some distribution if the standard deviation grows without bound? It appears to me that the pdf starts looking like a uniform distribution with bounds given by $[-2 \sigma, 2 \sigma]$. Is this true?

#### Best Answer

The other answers here do a great job of explaining why Gaussian RVs don't converge to anything as the variance increases without bound. Still, I want to point out a seemingly uniform property that such a collection of Gaussians does satisfy: one that might be enough to tempt someone into guessing that they are becoming uniform, but that turns out not to be strong enough to conclude that. $\newcommand{\len}{\text{len}}$

Consider a collection of random variables $\{X_1,X_2,\dots\}$ where $X_n \sim \mathcal N(0, n^2)$. Let $A = [a_1,a_2]$ be a fixed interval of finite length, and for some $c \in \mathbb R$ define $B = A +c$, i.e. $B$ is $A$ but just shifted over by $c$. For an interval $I = [i_1,i_2]$ define $\len (I) = i_2-i_1$ to be the length of $I$, and note that $\len(A) = \len(B)$.

I'll now prove the following result:

Result: $\vert P(X_n \in A) - P(X_n\in B)\vert \to 0$ as $n \to \infty$.

I call this uniform-like because it says that, under the distribution of $X_n$, any two fixed intervals of equal length are assigned increasingly similar probabilities, no matter how far apart they may be. That's definitely a very uniform feature, but as we'll see it says nothing about the actual distributions of the $X_n$ converging to a uniform one.

Proof: Note that $X_n \overset{d}{=} n X_1$ where $X_1 \sim \mathcal N(0, 1)$, so $$P(X_n \in A) = P(a_1 \leq n X_1 \leq a_2) = P\left(\frac{a_1}{n} \leq X_1 \leq \frac{a_2}n\right)$$ $$= \frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} e^{-x^2/2}\,\text dx.$$ Using the (very rough) bound $e^{-x^2/2} \leq 1$, I get $$\frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} e^{-x^2/2}\,\text dx \leq \frac{1}{\sqrt{2\pi}}\int_{a_1/n}^{a_2/n} 1\,\text dx = \frac{\len(A)}{n\sqrt{2\pi}}.$$
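As a sanity check on this step, here is a small Python sketch (not part of the original argument; it uses only the standard library's `math.erf` to build the normal CDF, and the interval $A = [3, 5]$ is an arbitrary choice for illustration):

```python
import math

def normal_cdf(x):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def prob_in_interval(n, a1, a2):
    """P(X_n in [a1, a2]) where X_n ~ N(0, n^2), i.e. X_n = n * X_1."""
    return normal_cdf(a2 / n) - normal_cdf(a1 / n)

a1, a2 = 3.0, 5.0  # an arbitrary fixed interval A
for n in (1, 10, 100, 1000):
    p = prob_in_interval(n, a1, a2)
    bound = (a2 - a1) / (n * math.sqrt(2 * math.pi))  # len(A) / (n * sqrt(2*pi))
    print(f"n={n}: P(X_n in A) = {p:.6f} <= {bound:.6f}")
    assert p <= bound  # the rough bound e^{-x^2/2} <= 1 in action
```

The printed probabilities and the bound both decay like $1/n$, which is exactly the "flattening" the proof exploits.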

I can do the same thing for $B$ to get $$P(X_n \in B) \leq \frac{\len(B)}{n\sqrt{2\pi}} = \frac{\len(A)}{n\sqrt{2\pi}}.$$

Putting these together, the triangle inequality $\vert x - y \vert \leq \vert x \vert + \vert y \vert$ gives $$\left\vert P(X_n \in A) - P(X_n \in B)\right\vert \leq \frac{2\len(A)}{n\sqrt{2\pi}} = \frac{\sqrt 2 \,\len(A) }{n\sqrt{\pi}} \to 0$$ as $n\to\infty$.
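The vanishing difference can also be seen numerically. A hedged Python sketch (again using a standard-library normal CDF; the choices $A = [0, 1]$ and shift $c = 1000$ are arbitrary):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def prob(n, lo, hi):
    """P(X_n in [lo, hi]) where X_n ~ N(0, n^2)."""
    return normal_cdf(hi / n) - normal_cdf(lo / n)

a1, a2 = 0.0, 1.0  # A = [0, 1]
c = 1000.0         # B = A + c, very far from A
for n in (10, 100, 1000, 10000):
    diff = abs(prob(n, a1, a2) - prob(n, a1 + c, a2 + c))
    bound = math.sqrt(2) * (a2 - a1) / (n * math.sqrt(math.pi))
    print(f"n={n}: |P(X_n in A) - P(X_n in B)| = {diff:.8f} <= {bound:.8f}")
    assert diff <= bound  # the Result's bound, sqrt(2)*len(A)/(n*sqrt(pi))
```

Even with the two intervals a thousand units apart, the probability gap shrinks at the $1/n$ rate the Result promises.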

$\square$

How is this different from $X_n$ converging to a uniform distribution? I proved only that the probabilities assigned to any two fixed intervals of the same finite length get closer and closer, which makes intuitive sense: the densities are "flattening out" from $A$'s and $B$'s perspectives.

But in order for $X_n$ to converge to a uniform distribution, I'd need $P(X_n \in I)$ to approach something proportional to $\len(I)$ for every interval $I$, and that is a very different thing: it must hold for all intervals simultaneously, not just ones fixed in advance (and, as mentioned elsewhere, this is not even possible for a distribution with unbounded support).
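To make the distinction concrete: the bound above shows $P(X_n \in I) \to 0$ for every fixed interval $I$, so the only candidate "proportionality constant" is zero, while the mass instead sits in windows that grow with $n$. A short Python sketch of this (same standard-library CDF as before; the intervals are arbitrary illustrations):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def prob(n, lo, hi):
    """P(X_n in [lo, hi]) where X_n ~ N(0, n^2)."""
    return normal_cdf(hi / n) - normal_cdf(lo / n)

# Any fixed interval loses its mass as n grows...
for n in (1, 10, 100, 1000):
    print(f"n={n}: P(X_n in [-1, 1]) = {prob(n, -1.0, 1.0):.6f}")

# ...while the growing window [-2n, 2n] always keeps ~95% of it,
# so the mass escapes every fixed interval rather than settling
# into anything proportional to length.
for n in (1, 10, 100):
    print(f"n={n}: P(X_n in [-2n, 2n]) = {prob(n, -2.0 * n, 2.0 * n):.6f}")
```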