Solved – Estimate the Kullback-Leibler divergence

kullback-leibler

I would like to be sure I am able to compute the KL divergence based on a sample.

Assume the data come from a Gamma distribution with shape=1/.85 and scale=.85.

set.seed(937)

theta <- .85

x <- rgamma(1000, shape=1/theta, scale=theta)

Based on that sample, I would like to compute the KL divergence from the true underlying distribution to an inverse Gaussian distribution with mean 1 (mu = 1) and precision 0.832 (lambda = 0.832).

I obtain KL = 1.286916.

Can you confirm that I am computing it correctly?
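
The estimator in question is presumably the usual Monte Carlo one, the sample average of the log density ratio: $\mathrm{KL} \approx \frac{1}{n}\sum_{i=1}^n \log\left(p(x_i)/q(x_i)\right)$ with $x_i \sim p$. A minimal R sketch of that computation, reusing x and theta from above (the dinvgauss helper is written out by hand here, since the inverse Gaussian density is not in base R):

dinvgauss <- function(x, mu, lambda) {
  # inverse Gaussian density: sqrt(lambda/(2 pi x^3)) exp(-lambda (x - mu)^2 / (2 mu^2 x))
  sqrt(lambda / (2 * pi * x^3)) * exp(-lambda * (x - mu)^2 / (2 * mu^2 * x))
}
p <- dgamma(x, shape = 1/theta, scale = theta)  # true (Gamma) density at the sample
q <- dinvgauss(x, mu = 1, lambda = 0.832)       # candidate inverse Gaussian density
mean(log(p / q))                                # Monte Carlo estimate of the KL divergence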

Best Answer

Mathematica, using symbolic integration (not an approximation!), reports a value that equals 1.6534640367102553437 to 20 significant digits.

(* Gamma with shape 20/17 = 1/0.85 and scale 17/20 = 0.85 *)
red = GammaDistribution[20/17, 17/20];
(* inverse Gaussian with mean 1 and precision 832/1000 *)
gray = InverseGaussianDistribution[1, 832/1000];
(* KL(p || q) = integral of p(x) (log p(x) - log q(x)) over the support *)
kl[pF_, qF_] := Module[{p, q},
   p[x_] := PDF[pF, x];
   q[x_] := PDF[qF, x];
   Integrate[p[x] (Log[p[x]] - Log[q[x]]), {x, 0, \[Infinity]}]
   ];
kl[red, gray]

In general, a small Monte Carlo sample is inadequate for computing these integrals. As we saw in another thread, the value of the KL divergence can be dominated by the integral over short intervals where the base PDF is nonzero but the other PDF is close to zero. A small sample can miss such intervals entirely, and it can take a large sample to hit them often enough to obtain an accurate result.

Take a look at the Gamma PDF (red, dashed) and the logarithm of the inverse Gaussian PDF (gray) near 0:

[Figure: Gamma PDF near 0]

[Figure: log of the inverse Gaussian PDF near 0]

In this case, the Gamma PDF stays high near 0 while the log of the inverse Gaussian PDF diverges there. Virtually all of the KL divergence is contributed by values in the interval $[0, 0.05]$, which has a probability of 3.2% under the Gamma distribution. The number of elements of a sample of $N = 1000$ Gamma variates that fall in this interval therefore has a Binomial($N$, $0.032$) distribution. Its standard deviation, expressed as a fraction of $N$, equals $\sqrt{.032(1 - .032)/N} \approx 0.55\%$. Thus, as a rough estimate, we can't expect your integral to have a relative accuracy much better than $2 \times 0.55\% / 3.2\% \approx 34\%$ -- give or take, around 40% -- because the number of times it samples this critical interval has this much error in it. That alone accounts for the difference between your result and Mathematica's. To get this error down to 1% -- a mere two decimal places of precision -- you would need to multiply your sample size by approximately $(40/1)^2 = 1600$: that is, you would need a few million values.
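
As a quick sanity check on these figures, the interval probability and the binomial standard deviation can be recomputed in a few lines of R (the cutoff 0.05 is the interval endpoint assumed above):

p.crit <- pgamma(0.05, shape = 1/0.85, scale = 0.85)  # P(X <= 0.05), about 0.032
sd.crit <- sqrt(p.crit * (1 - p.crit) / 1000)         # binomial SD for N = 1000, about 0.0055
2 * sd.crit / p.crit                                  # rough relative error, about 0.35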

Here is a histogram of the natural logarithms of 1000 independent estimates of the KL divergence. Each estimate is the average of the log density ratio over 1000 values drawn at random from the Gamma distribution. The logarithm of the correct value is shown as the dashed red line.

[Figure: histogram of the logarithms of the 1000 KL estimates]

(* 1000 Monte Carlo estimates of the KL divergence, each from 1000 Gamma draws *)
Histogram[Log[Table[
   Mean[Log[PDF[red, #]/PDF[gray, #]] & /@ RandomReal[red, 1000]],
   {i, 1, 1000}]]]
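
An equivalent simulation can be run in R; here is a sketch reusing the dinvgauss helper from the question (the number of histogram breaks is arbitrary):

est <- replicate(1000, {
  y <- rgamma(1000, shape = 1/0.85, scale = 0.85)
  mean(log(dgamma(y, shape = 1/0.85, scale = 0.85) /
           dinvgauss(y, mu = 1, lambda = 0.832)))
})
hist(log(est), breaks = 40)                          # histogram of the log estimates
abline(v = log(1.6534640367), col = "red", lty = 2)  # log of the true value
mean(est < 1.6534640367)                             # fraction of underestimates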

Although on average this Monte Carlo method is unbiased, most of the time (87% in this simulation) the estimate is too low. To make up for that, the occasional overestimate can be gross: the largest of these 1000 estimates is 18.98. (The wide spread of values shows that the estimate of 1.286916 actually has no reliable decimal digits!) Because of this huge skewness in the distribution, the situation is actually much worse than the simple binomial thought experiment above suggested. The average of these simulations (comprising 1000 * 1000 values in total) is just 1.21, still about 25% less than the true value.

To compute the KL divergence reliably in general, you need to use adaptive quadrature or exact methods.
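
For example, R's built-in integrate function performs adaptive quadrature and recovers the symbolic value directly; a sketch, again reusing the dinvgauss helper from the question:

f <- function(t) {
  p <- dgamma(t, shape = 1/0.85, scale = 0.85)
  # guard against 0 * log(0/0) = NaN where both densities underflow
  ifelse(p > 0, p * log(p / dinvgauss(t, mu = 1, lambda = 0.832)), 0)
}
integrate(f, 0, Inf)  # should agree with the symbolic result, about 1.653464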