R – Simulating a Normal Distribution Constrained by an Upper Bound

Tags: normal-distribution, r, simulation, truncation

I'd like to generate random data from a constrained normal distribution using R.

For example, I might want to simulate a variable from a normal distribution with mean = 3 and sd = 2, where any values larger than 5 are resampled from the same normal distribution.

Thus, for the unconstrained version, I could do the following.

rnorm(n=100, mean=3, sd=2)

I then had a few thoughts:

  • Iterate an ifelse call inside a loop that repeats until all values lie within the bounds (see the sketch after this list).
  • Simulate many more values than required and take the first n that satisfy the constraint.
  • Avoid vectorised normal simulation and instead use a for loop with a repeat loop inside, simulating each observation one at a time and resampling where required.
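For example, the first idea might look something like this minimal sketch (the function name rnorm_resample is just illustrative, not from any package):

# Resample any values above the bound until all draws satisfy the constraint.
rnorm_resample <- function(n, mean = 3, sd = 2, upper = 5) {
  x <- rnorm(n, mean, sd)
  repeat {
    bad <- x > upper                      # which values violate the constraint?
    if (!any(bad)) break
    x[bad] <- rnorm(sum(bad), mean, sd)   # redraw only those values
  }
  x
}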

All of the above seem a little bit clunky.

Question

  • What is a simple way to simulate a constrained random normal variable in R, e.g. from a normal with mean = 3, sd = 2 and max = 5?
  • More generally, what is a good way of incorporating constraints into simulated variables in R?

Best Answer

This is called a truncated normal distribution:

http://en.wikipedia.org/wiki/Truncated_normal_distribution

Christian Robert wrote about an approach to doing it for a variety of situations (using different algorithms depending on where the truncation points were) here:

Robert, C.P. (1995) "Simulation of truncated normal variables",
Statistics and Computing, Volume 5, Issue 2, June, pp 121-125

Paper available at http://arxiv.org/abs/0907.4010

This discusses a number of different ideas for different truncation points. It's not the only way of approaching the problem by any means, but it typically has pretty good performance. If you want to draw lots of different truncated normals with various truncation points, it would be a reasonable approach. The msm package's rtnorm is based on Robert's approach, while truncnorm's rtruncnorm implements Geweke's (1991) accept-reject sampler, which is related to the approach in Robert's paper. Note that msm also includes density (dtnorm), cdf (ptnorm), and quantile (qtnorm) functions in the usual R fashion.
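For the example in the question, either package gives essentially a one-line solution (a minimal sketch, assuming msm and truncnorm are installed):

library(msm)
library(truncnorm)

# Both draw from a normal with mean 3, sd 2, truncated above at 5.
x1 <- rtnorm(100, mean = 3, sd = 2, upper = 5)             # msm's sampler
x2 <- rtruncnorm(100, a = -Inf, b = 5, mean = 3, sd = 2)   # truncnorm's sampler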

An older reference with an approach is Luc Devroye's book, Non-Uniform Random Variate Generation (1986); since it went out of print he has gotten the copyright back and made it available as a free download.

Your particular example is the same as sampling a standard normal truncated above at 1 (if $t$ is the truncation point, $(t-\mu)/\sigma = (5-3)/2 = 1$), and then scaling the result back (multiply by $\sigma$ and add $\mu$).

In that specific case, Robert suggests that your idea (in the second or third incarnation) is quite reasonable. You get an acceptable value about 84% of the time (since $\Phi(1) \approx 0.84$) and so generate about $1.19\,n$ normals on average (you can work out bounds so that you generate enough values using a vectorized algorithm say 99.5% of the time, and then once in a while generate the last few less efficiently - even one at a time).
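For illustration, here is a minimal sketch of that vectorised over-generation idea (the function name rnorm_upper and the particular safety margin are my own choices, not from Robert's paper):

rnorm_upper <- function(n, mean = 3, sd = 2, upper = 5) {
  p <- pnorm(upper, mean, sd)                      # acceptance probability, ~0.84 here
  m <- ceiling(n / p + 4 * sqrt(n * (1 - p)) / p)  # batch size with a generous safety margin
  x <- rnorm(m, mean, sd)
  x <- x[x <= upper]
  while (length(x) < n) {                          # rarely needed: top up one at a time
    y <- rnorm(1, mean, sd)
    if (y <= upper) x <- c(x, y)
  }
  x[1:n]
}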

There's also discussion of an implementation in R code here (and in Rcpp in another answer to the same question, but the plain R code there is actually faster). That R code generates 50000 truncated normals in 6 milliseconds, though that particular truncated normal only cuts off the extreme tails, so a more substantive truncation would make it slower. It implements the idea of generating "too many" by calculating how many it should generate to be almost certain to get enough.

If I needed just one particular kind of truncated normal a lot of times, I'd probably look at adapting a version of the ziggurat method, or something similar, to the problem.

In fact, it looks like Nicolas Chopin did just that already, so I'm not the only person to whom that idea has occurred:

http://arxiv.org/abs/1201.6140

He discusses several other algorithms and compares the time taken by three versions of his algorithm and by several other algorithms to generate $10^8$ truncated random normals at various truncation points.

Perhaps unsurprisingly, his algorithm turns out to be relatively fast.

From the graph in the paper, even the slowest of the algorithms he compares against, at their worst truncation points, generate $10^8$ values in about 3 seconds, which suggests that any of the algorithms discussed there may be acceptable if reasonably well implemented.

Edit: One approach that I am not certain is mentioned here (though perhaps it's in one of the links) is to transform a truncated uniform via the inverse normal cdf -- the uniform can be truncated by simply generating a uniform within the truncation bounds. If the inverse normal cdf is fast, this is both quick and easy and works well for a wide range of truncation points.
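A minimal sketch of that inverse-cdf idea for the question's example (the function name rtnorm_inv is illustrative):

rtnorm_inv <- function(n, mean = 3, sd = 2, upper = 5) {
  u <- runif(n, 0, pnorm(upper, mean, sd))  # uniform restricted to the truncation bounds
  qnorm(u, mean, sd)                        # map back through the normal quantile function
}
x <- rtnorm_inv(10000)
max(x) <= 5   # TRUE: every draw respects the bound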