BACKGROUND
In studying Bayesian hierarchical models, the truncated normal distribution often arises as a posterior in applications like probit regression and censored-data models. Here, I outline a few methods for sampling efficiently from this distribution. For a much more rigorous treatment, see my PDF report. I've also attached Matlab code for practical use of these algorithms.
OVERVIEW
We consider the 1-dimensional truncated normal distribution, whose density is proportional to the standard normal density but with support restricted to an interval a < x < +Inf (as in the shaded regions in the plot below).
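Concretely, writing φ and Φ for the standard normal PDF and CDF, the truncated density is f(x) = φ(x) / (1 − Φ(a)) for x > a and zero otherwise; the factor 1 / (1 − Φ(a)) simply renormalizes the remaining tail mass to one.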
Depending on where the truncation bound a sits relative to the mean, the distribution has very different qualitative behavior. To sample from it, we take a piecewise approach: a fast approximate inversion sampler when a is close to the mean, and an asymptotically efficient rejection sampler when a is far enough from the mean that the numerical approximation becomes unstable.
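As a concrete illustration of this piecewise strategy, here is a minimal MATLAB sketch of the dispatch logic. The function names trunc_norm_inv and trunc_norm_rej are hypothetical placeholders for the two samplers sketched in the sections below (not the attached code), and the 5-standard-deviation cutoff is only the rough rule of thumb mentioned later, not a tuned constant.

function x = sample_trunc_norm(mu, sigma, a)
% Piecewise sampler for N(mu, sigma^2) truncated to (a, Inf):
% approximate inversion for moderate truncation bounds, exponential-proposal
% rejection sampling far out in the tail.
alpha = (a - mu) / sigma;                 % standardized truncation bound
if alpha < 5                              % rough ~5-sigma rule of thumb
    x = trunc_norm_inv(mu, sigma, a, 1);  % draw one sample by inversion
else
    x = trunc_norm_rej(mu, sigma, a);     % draw one sample by rejection
end
end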
INVERSION SAMPLING VIA NUMERICAL APPROXIMATION
Kevin Murphy's textbook provides the following approximate sampler, which is very efficient for truncation bounds a within a few standard deviations of the mean.
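A minimal MATLAB sketch of this inversion approach (my own rendering, not Murphy's exact pseudocode or the attached code): draw a uniform on the interval (Φ(α), 1) and push it through the inverse standard normal CDF, using erfc/erfcinv to avoid a toolbox dependency.

function x = trunc_norm_inv(mu, sigma, a, n)
% Approximate inversion sampler for n draws from N(mu, sigma^2) truncated
% to (a, Inf). Accurate when the standardized bound (a - mu)/sigma is
% within a few standard deviations of zero.
alpha = (a - mu) / sigma;                 % standardized truncation bound
Phi_a = 0.5 * erfc(-alpha / sqrt(2));     % standard normal CDF at alpha
u = rand(n, 1);                           % uniforms on (0, 1)
p = Phi_a + u .* (1 - Phi_a);             % uniforms on (Phi(alpha), 1)
z = -sqrt(2) * erfcinv(2 * p);            % inverse standard normal CDF
x = mu + sigma * z;                       % map back to the original scale
end

For large alpha, Phi_a is so close to 1 in double precision that p carries almost no information about u, and the inverse CDF output is dominated by rounding error; that is exactly the failure mode described next.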
In practice, I find this algorithm works quite well unless the truncation bound is very far from the mean (e.g. more than ~5 standard deviations, like the third plot in the above figure). In this regime, the numerical approximations required to compute the normal CDF and its inverse become disastrously inaccurate.
REJECTION SAMPLING
An alternative rejection sampler can be derived using an exponential proposal distribution (see the report for details). I found this version also works fine in practice, with no strong reason to prefer one rejection scheme over the other. Experiments and theoretical analysis show that both rejection schemes have asymptotically perfect acceptance efficiency as the truncation bound a approaches infinity.
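For reference, here is a minimal MATLAB sketch of one standard exponential-proposal construction (in the spirit of Robert, 1995; the function name is mine and this is not necessarily the exact scheme in the report or the attached code): propose z = alpha + Exp(lambda) beyond the standardized bound alpha and accept with probability exp(-(z - lambda)^2 / 2).

function x = trunc_norm_rej(mu, sigma, a)
% Rejection sampler for N(mu, sigma^2) truncated to (a, Inf) using a
% translated exponential proposal with rate lambda. The acceptance rate
% approaches 1 as the standardized bound alpha grows.
alpha = (a - mu) / sigma;                   % standardized truncation bound
lambda = (alpha + sqrt(alpha^2 + 4)) / 2;   % optimal exponential rate
while true
    z = alpha - log(rand()) / lambda;       % proposal draw: alpha + Exp(lambda)
    if rand() <= exp(-(z - lambda)^2 / 2)   % accept with prob exp(-(z-lambda)^2/2)
        break;
    end
end
x = mu + sigma * z;                         % map back to the original scale
end

With this choice of lambda the acceptance probability tends to 1 as alpha grows, which is the asymptotic efficiency referred to above.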