Sampling – How to Sample and Compute Likelihood from Mollified Uniform Distribution

Tags: likelihood, normal-distribution, sampling, uniform-distribution

I want to draw samples from the mollified uniform distribution presented in another Cross Validated thread, cf. the answer by whuber. What is the best way to do so?

I have tried drawing $\mu \sim U[0, 1]$ and then drawing $x \mid \mu \sim \mathcal{N}(\mu, \sigma)$, where $\sigma$ is the standard deviation of the mollifier. It seems to work; cf. the histogram below for 1,000,000 points with $\sigma = 0.1$.
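For reference, here is a minimal NumPy sketch of this two-step scheme (the seed and variable names are my own, not from the linked thread):

```python
# Two-step sampling sketch: mu ~ U[0, 1], then x | mu ~ N(mu, sigma).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)          # illustrative seed
n, sigma = 1_000_000, 0.1               # values from the question

mu = rng.uniform(0.0, 1.0, size=n)      # mu ~ U[0, 1]
x = rng.normal(loc=mu, scale=sigma)     # x | mu ~ N(mu, sigma)

plt.hist(x, bins=200, density=True)
plt.title("Mollified uniform samples (sigma = 0.1)")
plt.show()
```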

If this is correct, could someone explain why it works, please? And if I get a new point, say $x_{\text{new}} = 1.04$, how can I compute the likelihood of this observation?

[Histogram of the 1,000,000 sampled points with $\sigma = 0.1$]

Best Answer

This works.

Another way to view what you're doing is as $$ \mu \sim U[0, 1], \quad \delta \sim \mathcal N(0, \sigma), \quad X = \mu + \delta .$$ The density of the sum of two independent variables is the convolution of their densities, which is exactly how @whuber defined the mollified uniform distribution here.
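As a quick sanity check (my own sketch, not from whuber's answer), the two-step scheme and the sum $\mu + \delta$ can be compared with a two-sample Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n, sigma = 100_000, 0.1

# Two-step scheme: draw mu, then x | mu ~ N(mu, sigma).
x_hier = rng.normal(loc=rng.uniform(size=n), scale=sigma)

# Sum formulation: X = mu + delta with delta ~ N(0, sigma).
x_sum = rng.uniform(size=n) + rng.normal(0.0, sigma, size=n)

# A large p-value is consistent with both samples sharing one distribution.
print(ks_2samp(x_hier, x_sum))
```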

Evaluating the pdf at a single point is a little more complicated. If $x$ is much farther than $\sigma$ from both $0$ and $1$, i.e. $\min \{ \lvert x - 0 \rvert, \lvert x - 1 \rvert \} \gg \sigma$, then for practical purposes you can treat the density as either $0$ (outside $[0, 1]$) or $1$ (inside). In your example, though, $\sigma$ is fairly large relative to the distance of $x_{\text{new}} = 1.04$ from the boundary. In that case, your density is the value of the convolution $$ f(x) = \int_0^1 \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{1}{2\sigma^2} (x - \mu)^2} \, \mathrm d \mu .$$ The integrand is the probability density of seeing $x$ given that the original uniform sample was $\mu$, and the integral marginalizes over all possible values of $\mu$.
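Numerically, this integral can be evaluated with ordinary quadrature; here is a sketch using scipy.integrate.quad with the values from the question ($\sigma = 0.1$, $x_{\text{new}} = 1.04$):

```python
import numpy as np
from scipy.integrate import quad

sigma, x_new = 0.1, 1.04  # values from the question

def integrand(mu, x, sigma):
    # N(mu, sigma) density evaluated at x; integrated over mu in [0, 1].
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / np.sqrt(2 * np.pi * sigma**2)

density, abs_err = quad(integrand, 0.0, 1.0, args=(x_new, sigma))
print(density)  # approximately 0.345
```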

One way to compute this integral is to notice that, while we defined it with $x$ as the normal variable, it is exactly the same formula as computing the probability that a normal random variable $\mu \sim \mathcal N(x, \sigma)$ falls in the interval $[0, 1]$: $$ f(x) = \Phi\left( \frac{1 - x}{\sigma} \right) - \Phi\left( \frac{-x}{\sigma} \right) .$$ Indeed, we can see that as $\sigma \to 0$: when $x \in (0, 1)$ it becomes $\Phi(\infty) - \Phi(-\infty) = 1 - 0 = 1$, when $x > 1$ it becomes $\Phi(-\infty) - \Phi(-\infty) = 0$, and when $x < 0$ it becomes $\Phi(\infty) - \Phi(\infty) = 0$: a uniform, like we wanted. (The exception is that right at $x = 1$ or $x = 0$ it is $\tfrac12$, but this single point doesn't really matter.)
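In code, with $\Phi$ played by scipy.stats.norm.cdf (a sketch; the function name is mine, and the $\sigma$ and $x_{\text{new}}$ are the values from the question):

```python
from scipy.stats import norm

def mollified_uniform_pdf(x, sigma):
    # f(x) = Phi((1 - x) / sigma) - Phi((0 - x) / sigma)
    return norm.cdf((1.0 - x) / sigma) - norm.cdf(-x / sigma)

print(mollified_uniform_pdf(1.04, 0.1))  # ~0.345, matching the quadrature result above
```

This value is the likelihood of the single observation $x_{\text{new}} = 1.04$; for several independent observations you would multiply the densities (or, more commonly, sum their logs).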
