It was once asked on CrossValidated whether the normal distribution converges to a uniform distribution as the standard deviation grows to infinity. (The answer was no.) I am curious about a related but slightly different question. Suppose I have an arbitrarily chosen fixed interval $[A, B)$. One can consider the uniform distribution on that interval, with density $1/(B-A)$ for $A \le x < B$. One can also consider the centered normal distribution $\mathcal{N}(0, \sigma^2)$, with pdf $f(x)$; we are interested in the distribution obtained by truncating it to $[A, B)$, with pdf $g(x) \propto f(x)$ for $A \le x < B$ and $0$ elsewhere. As $\sigma \rightarrow \infty$, does $g(x)$ converge to the uniform distribution with density $1/(B-A)$?
Does a bounded section of the normal distribution converge to the uniform distribution?
Tags: convergence-divergence, normal-distribution, probability-distributions
Related Solutions
You have a stochastic process $\{X(t) : t \geq 0\}$ with the property that $X(t_2) - X(t_1) \sim N(0, t_2 - t_1)$, which is quite clearly meant to be Brownian motion (BM). Suppose first that you want to find the distribution function of the running maximum $M(t) = \mathop{\max}\limits_{0 \le s \le t} X(s)$ (the maximum exists, since BM has continuous sample paths). There is a very simple formula for it, namely
$$ {\rm P}(M(t) \le x) = \sqrt{\frac{2}{\pi t}} \int_0^x e^{-u^2/(2t)} \, {\rm d}u, \;\; x \geq 0. $$

The situation is a little more complicated if you want the distribution function of $M(t_1, t_2) = \mathop{\max}\limits_{t_1 \le s \le t_2} X(s)$ (i.e., the maximum of $X$ over the time interval $[t_1, t_2]$). For this purpose, we condition on the initial value $X(t_1)$. Since $X(t_1) \sim N(0, t_1)$, it has density function $f(u; t_1) = \frac{1}{\sqrt{2\pi t_1}} e^{-u^2/(2t_1)}$, and by the law of total probability
$$ {\rm P}(M(t_1, t_2) \le x) = \int_{-\infty}^{\infty} {\rm P}(M(t_1, t_2) \le x \mid X(t_1) = u) f(u; t_1) \, {\rm d}u. $$
Now, by basic properties of BM, conditioned on $X(t_1) = u$ the maximum $M(t_1, t_2)$ can be replaced by $u + M(t_2 - t_1)$ (more precisely, by $u$ plus an independent copy of $M(t_2 - t_1)$, independent of $X(t_1)$). This leads to
$$ {\rm P}(M(t_1, t_2) \le x) = \int_{-\infty}^{\infty} {\rm P}(M(t_2 - t_1) \le x - u) f(u; t_1) \, {\rm d}u. $$
Finally, since $M(t_2 - t_1)$ cannot be negative, the integrand vanishes for $u > x$, so we only need to integrate up to $x$. That is,
$$ {\rm P}(M(t_1, t_2) \le x) = \int_{-\infty}^{x} {\rm P}(M(t_2 - t_1) \le x - u) f(u; t_1) \, {\rm d}u, \;\; x \in {\bf R}. $$
So we have a double integral with an elementary integrand. Perhaps it can be simplified, or perhaps the result can be found in the literature.
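If it helps, here is a minimal numerical sanity check of the final formula (my own sketch, not part of the original answer). It compares the double integral, evaluated with `scipy.integrate.quad`, against a Monte Carlo estimate from discretized Brownian paths; the values of `t1`, `t2`, `x` and the grid sizes are arbitrary illustrative choices.

```python
# Sketch: numerically check the double-integral formula for P(M(t1, t2) <= x).
# Parameter values and grid sizes are arbitrary illustrative choices.
import numpy as np
from scipy import integrate
from scipy.stats import norm

t1, t2, x = 1.0, 2.0, 1.5
tau = t2 - t1  # length of the time interval [t1, t2]

def F_max(y, t):
    # P(M(t) <= y): the running-maximum distribution from the first formula,
    # which equals 2*Phi(y/sqrt(t)) - 1 for y >= 0 and 0 for y < 0.
    return max(2.0 * norm.cdf(y / np.sqrt(t)) - 1.0, 0.0)

# Outer integral over the initial value u = X(t1) ~ N(0, t1).
integrand = lambda u: F_max(x - u, tau) * norm.pdf(u, scale=np.sqrt(t1))
formula, _ = integrate.quad(integrand, -np.inf, x)

# Monte Carlo comparison: discretized Brownian paths on [t1, t2].  Note the
# discrete maximum slightly underestimates the continuous one.
rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 1_000
x0 = rng.normal(scale=np.sqrt(t1), size=n_paths)
steps = rng.normal(scale=np.sqrt(tau / n_steps), size=(n_paths, n_steps))
run_max = np.maximum(steps.cumsum(axis=1).max(axis=1), 0.0)  # include X(t1) itself
mc = np.mean(x0 + run_max <= x)

print(f"double integral: {formula:.4f}   Monte Carlo: {mc:.4f}")
```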
Given a uniform prior and (independent) observations from a normal distribution, the resulting posterior is a truncated normal distribution. In this case, however, the observations themselves are drawn from a truncated normal, which makes things more complicated.
First, you can 'ignore' the integral in the denominator, since it is just a constant ensuring that the posterior is a density. In general, $$p(\mu \mid x) \propto p(x \mid \mu)\, p(\mu).$$ As you have derived (note that the factor $1/\sigma$ is constant in $\mu$ and can be dropped): $$p(\mu \mid x) \propto \frac{\phi\left(\frac{x-\mu}{\sigma}\right)}{\Phi\left(\frac{1-\mu}{\sigma}\right) - \Phi\left(\frac{-\mu}{\sigma}\right)}I_{\mu \in [0,1]}.$$ At first glance this looks like a truncated normal again; however, the variable is now $\mu$ rather than $x$, and comparing with the truncated normal density shows that this is no longer the case.
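Since this posterior has no standard closed form, a practical option is to evaluate the unnormalized density on a grid over $\mu \in [0, 1]$ and normalize numerically. A minimal sketch (my addition; `sigma` and the observation `x_obs` are illustrative values, and for several observations one would multiply the likelihood terms before normalizing):

```python
# Sketch: grid evaluation and numeric normalization of the posterior above.
import numpy as np
from scipy.stats import norm

sigma = 0.3
x_obs = 0.6
mu = np.linspace(0.0, 1.0, 1001)

# p(mu | x) ∝ phi((x - mu)/sigma) / [Phi((1 - mu)/sigma) - Phi((0 - mu)/sigma)]
unnorm = norm.pdf((x_obs - mu) / sigma) / (
    norm.cdf((1.0 - mu) / sigma) - norm.cdf((0.0 - mu) / sigma)
)
posterior = unnorm / np.trapz(unnorm, mu)  # numeric normalizing constant

print(f"posterior mean of mu ~ {np.trapz(mu * posterior, mu):.3f}")
```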
Best Answer
Yes, even uniformly. Nothing fancy is needed; we just need to formalize the intuitive idea that, as $\sigma \to \infty$, the Gaussian density on any fixed interval becomes closer and closer to constant.
On the interval $[A, B]$ the (unnormalized) Gaussian density $f(x) = \exp \left( - \frac{x^2}{2\sigma^2} \right)$ is bounded from above by $1$ and bounded from below by $\exp \left( - \frac{\max(A^2, B^2)}{2\sigma^2} \right)$ (this slightly awkward expression is needed to handle the case that $A$ is negative and $B$ is positive), and as $\sigma \to \infty$ the lower bound converges to $1$. Bounding the numerator and denominator of the normalized density separately gives
$$ \frac{\exp \left( - \frac{\max(A^2, B^2)}{2\sigma^2} \right)}{B - A} \le g(x) = \frac{\exp \left( - \frac{x^2}{2\sigma^2} \right)}{\int_A^B \exp \left( - \frac{t^2}{2\sigma^2} \right) {\rm d}t} \le \frac{1}{(B - A) \exp \left( - \frac{\max(A^2, B^2)}{2\sigma^2} \right)},$$
and since both bounds are independent of $x$ and converge to $\frac{1}{B - A}$ as $\sigma \to \infty$, we see that $g(x)$ converges uniformly to $\frac{1}{B - A}$, as desired.
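For what it's worth, here is a quick numerical illustration of this convergence (my own sketch; the interval $[A, B]$ and the $\sigma$ values are arbitrary choices), using `scipy.stats.truncnorm`, which expects the truncation bounds in standard-deviation units:

```python
# Sketch: watch sup |g(x) - 1/(B-A)| shrink as sigma grows.
import numpy as np
from scipy.stats import truncnorm

A, B = -2.0, 5.0
x = np.linspace(A, B, 2001)

for sigma in [1.0, 10.0, 100.0, 1000.0]:
    # truncnorm takes the bounds rescaled by the standard deviation.
    g = truncnorm.pdf(x, A / sigma, B / sigma, loc=0.0, scale=sigma)
    sup_err = np.abs(g - 1.0 / (B - A)).max()
    print(f"sigma = {sigma:7.0f}: sup |g(x) - 1/(B-A)| = {sup_err:.2e}")
```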
This argument shows that $A$ and $B$ don't even need to be fixed and can grow slowly (sublinearly) with $\sigma$, e.g. $A = -B$ with $B = O(\sqrt{\sigma})$, since the exponent $\max(A^2, B^2)/(2\sigma^2)$ still tends to $0$. A sketch of this case follows below.
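The same numerical check as above, now with a growing interval (again my own sketch with arbitrary $\sigma$ values); the relative deviation from uniform is the meaningful quantity here, since $1/(B - A)$ itself shrinks as the interval grows:

```python
# Sketch: repeat the check with A = -B and B = sqrt(sigma).
import numpy as np
from scipy.stats import truncnorm

for sigma in [10.0, 100.0, 1000.0, 10000.0]:
    B = np.sqrt(sigma)
    A = -B
    x = np.linspace(A, B, 2001)
    g = truncnorm.pdf(x, A / sigma, B / sigma, loc=0.0, scale=sigma)
    rel_err = np.abs(g * (B - A) - 1.0).max()  # relative deviation from uniform
    print(f"sigma = {sigma:7.0f}: sup relative error = {rel_err:.2e}")
```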