Solved – importance sampling from posterior distribution in R

bayesian, importance-sampling, monte-carlo, posterior, simulation

Today I read that Importance Sampling can be used to draw samples from a posterior distribution, just like Rejection Sampling. However, my understanding of Importance Sampling is that its main purpose is to compute expectations (i.e., integrals) that would otherwise be hard to compute. How can Importance Sampling be used to draw posterior samples from a Bayesian model?

I only know the posterior density up to a multiplicative constant, i.e., the normalisation constant is unknown.

Best Answer

The boundary between estimating expectations and producing simulations is rather vague in the sense that, once given an importance sample $$(x_1,\omega_1),\ldots,(x_T,\omega_T)\qquad\omega_t=f(x_t)/g(x_t)$$ estimating $\mathbb E_f[h(X)]$ by $$\frac{1}{T}\,\sum_{t=1}^T \omega_t h(x_t)$$ and estimating the cdf $F(\cdot)$ by $$\hat F(x)=\frac{1}{T}\,\sum_{t=1}^T\omega_t \Bbb I_{x_t\le x}$$ are of the same nature. Simulating from $\hat F$ is easily done by inversion, and this is also the concept at the basis of the weighted bootstrap.
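As a concrete sketch of this idea (my own toy example, not from the answer): take a standard normal target $f$, a wider normal proposal $g$, compute the weights $\omega_t=f(x_t)/g(x_t)$, use them both to estimate an expectation and to resample from $\hat F$ by the weighted bootstrap.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200_000

# Proposal g: N(0, 2^2); target f: N(0, 1). Both densities are fully known
# here, so the weights omega_t = f(x_t) / g(x_t) are exact.
x = rng.normal(0.0, 2.0, size=T)
log_f = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
log_g = -0.5 * (x / 2.0) ** 2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)
w = np.exp(log_f - log_g)

# Importance-sampling estimate of E_f[h(X)] with h(x) = x^2 (true value: 1).
est = np.mean(w * x**2)

# Simulating from the weighted cdf F-hat: multinomial resampling with
# probabilities w_t / sum(w) -- the "weighted bootstrap" step.
idx = rng.choice(T, size=T, p=w / w.sum())
draws = x[idx]

print(est, draws.mean(), draws.var())
```

The resampled `draws` behave approximately like draws from $f$ itself: their mean and variance are close to 0 and 1, which is exactly the sense in which importance sampling "produces simulations" and not only expectations.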

In the case where the density $f$ is known only up to a normalising constant, as in most Bayesian settings, renormalising the $\omega_t$'s by their sum (self-normalised importance sampling) still yields a convergent approximation.


The field of particle filters and sequential Monte Carlo (SMC) takes advantage of this principle to handle sequential targets and state-space models.
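A minimal bootstrap particle filter makes the connection explicit: at each time step, particles are propagated through the dynamics, weighted by the observation likelihood, and resampled, i.e., the propose/weight/weighted-bootstrap cycle above applied sequentially. The state-space model below is my own toy choice, not one from the answer.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear-Gaussian state-space model (assumed for illustration):
#   x_t = 0.8 * x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
phi, sig_x, sig_y = 0.8, 1.0, 0.5
Tn, N = 50, 5_000

# Simulate a hidden trajectory and its noisy observations.
x_true = np.zeros(Tn)
for t in range(1, Tn):
    x_true[t] = phi * x_true[t - 1] + rng.normal(0, sig_x)
y = x_true + rng.normal(0, sig_y, size=Tn)

# Bootstrap particle filter: propagate with the prior dynamics, weight by
# the observation likelihood, then resample (the weighted bootstrap again).
particles = rng.normal(0, 1, size=N)
filt_mean = np.zeros(Tn)
for t in range(Tn):
    particles = phi * particles + rng.normal(0, sig_x, size=N)
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2
    w = np.exp(logw - logw.max())   # stabilise before normalising
    w /= w.sum()
    filt_mean[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling

rmse = np.sqrt(np.mean((filt_mean - x_true) ** 2))
print(rmse)
```

Each resampling step is exactly a draw from the weighted cdf $\hat F$ of the current particle cloud, which is why SMC can be read as importance sampling turned into a simulation device.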

