Entropy – What is Exponential Entropy?

Tags: entropy, information-theory, logarithm

Differential entropy (the continuous version of Shannon's entropy measure) is

$$
H = - \int_{-\infty}^\infty f(x) \log f(x) \,\mathrm{d}x,
$$

where $f(x)$ is a probability density function.

What is the intuition behind computing the exponential entropy of this? Are the properties of the original improved?

$$
\exp(H) = \exp\Bigg[ -\int_{-\infty}^\infty f(x) \log f(x) \mathrm{d}x \Bigg]
$$

I'm guessing that the exponentiation means something, but what?


According to Cover and Thomas (1991), entropy as a measure of uncertainty is:

  • homogeneous
  • not left bounded
  • not sub-additive

Therefore, it lacks three of the four desirable properties of coherent risk measures. The exponential function is an attempt to address these shortcomings, but it does not do so adequately.

Best Answer

I will begin with building intuitions for the discrete case and then discuss the continuous case.

The discrete case

First, consider exponential entropy for the special case of a discrete uniform distribution $U^N$ over $N$ outcomes, i.e. $U^N_i = \frac{1}{N}$. It is easy to show that its exponential entropy equals the number of outcomes $N$:

\begin{align}
\exp\left(H\left(U^N\right)\right) &= \exp\left(-\sum_i U^N_i \ln\left(U^N_i\right)\right)\\
&= \exp\left(-\sum_i \frac{1}{N} \ln\left(\frac{1}{N}\right)\right)\\
&= \exp\left(N \cdot \frac{1}{N} \ln\left(N\right)\right)\\
&= N.
\end{align}

For an arbitrary probability distribution $P^M$ over $M$ outcomes, there is then some integer $N \leq M$ such that

\begin{align}
N = \exp\left(H\left(U^N\right)\right) \leq \exp\left(H\left(P^M\right)\right) \leq \exp\left(H\left(U^{N+1}\right)\right) = N + 1,
\end{align}

where $\exp\left(H\left(P^M\right)\right) = M$ just in case $P^M$ is uniform.

From this inequality, we can interpret exponential entropy as the effective number of outcomes: The probability distribution $P^M$ has about as much uncertainty as a uniform distribution over $\left\lfloor\exp\left(H\left(P^M\right)\right)\right\rfloor$ or $\left\lceil\exp\left(H\left(P^M\right)\right)\right\rceil$ outcomes. Intuitively, a probability distribution with exponential entropy near 2 is about as uncertain as a fair coin flip, and a probability distribution with exponential entropy near one is nearly deterministic.
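The effective-number-of-outcomes interpretation is easy to check numerically. Here is a minimal sketch (the helper name `exponential_entropy` and the example distributions are my own, for illustration):

```python
import math

def exponential_entropy(probs, base=math.e):
    """Exponential entropy exp(H(P)): the 'effective number of outcomes'."""
    h = -sum(p * math.log(p, base) for p in probs if p > 0)
    return base ** h

# Uniform over 4 outcomes: exponential entropy equals 4 exactly.
print(exponential_entropy([1/4] * 4))  # ≈ 4

# A skewed distribution over 4 outcomes behaves like 'effectively'
# fewer than 4 outcomes (here, somewhere between 2 and 3).
print(exponential_entropy([0.7, 0.1, 0.1, 0.1]))
```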

Exponential entropy is sometimes called perplexity. In this context, the base of the exponent and the logarithm is typically 2 rather than $e$, but this makes no difference as long as the two bases match, since $2^{\log_2(x)} = e^{\log_e(x)} = x$.

Predicting a sample

We can use these metrics and intuitions to understand how well a probability distribution predicts a sample. Call the true data distribution $P$ and the distribution we are measuring $Q$. In a typical use case, $Q$ is a model we have estimated, and we want to measure how well it fits data distributed according to $P$. The cross-entropy of $Q$ relative to $P$ is

\begin{align}
H(P, Q) &= -\sum_i P_i \ln Q_i.
\end{align}

In this typical use case, we cannot compute the cross-entropy exactly because we do not know $P$ (otherwise we would use $P$ instead of estimating $Q$). Instead, we gather a dataset $D$, or sample, distributed according to $P$, and perform a Monte Carlo estimate of $H(P, Q)$ by averaging across the dataset:

\begin{align}
H(P, Q) &= -\sum_i P_i \ln Q_i \\
&\approx -\frac{1}{T} \sum_{i \sim P} \ln Q_i \\
&= -\frac{1}{T} \sum_{i \in D} \ln Q_i,
\end{align}

where $D$ is a dataset of $T$ observations that we are treating as a random sample from the true distribution. (Note that $D$ may contain duplicate outcomes, and may lack some outcomes entirely.)

Note that $H(P, Q) \geq H(P)$, with equality just in case $P=Q$, so lower cross-entropy indicates that $Q$ is closer to $P$. If we exponentiate the cross-entropy to get the perplexity, we see how uncertain the distribution was on average when predicting each observation. A typical application is language modeling: if the perplexity is 100, then, on average, the model was as uncertain in predicting the next word as if it were choosing uniformly among 100 possible next words.
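The Monte Carlo estimate and its exponentiation can be sketched as follows (the distributions $P$ and $Q$ here are made up for illustration; in practice only the sample, not $P$, is available):

```python
import math
import random

random.seed(0)

# Hypothetical 'true' distribution P and an imperfect model Q over 4 outcomes.
P = {"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}
Q = {"a": 0.4, "b": 0.30, "c": 0.20, "d": 0.10}

# Draw a sample D ~ P.
outcomes, weights = zip(*P.items())
D = random.choices(outcomes, weights=weights, k=100_000)

# Monte Carlo estimate of the cross-entropy H(P, Q) = -E_P[ln Q].
cross_entropy = -sum(math.log(Q[x]) for x in D) / len(D)

# Exponentiating gives the perplexity: the model's average effective
# uncertainty per observation, in 'number of outcomes' units.
perplexity = math.exp(cross_entropy)
print(cross_entropy, perplexity)
```

Since $H(P, Q) \geq H(P)$, the perplexity here is bounded below by $\exp(H(P)) \approx 3.26$ and above by the 4 outcomes, reflecting that $Q$ is close to, but not equal to, $P$.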

Note that $D$ can be a different sample (still from $P$) from the one that was used to estimate $Q$. In this case, the perplexity is a held-out perplexity: it measures how well the model generalizes to unseen data from the same distribution it was estimated on, and it can be compared to the perplexity on the estimation dataset to assess whether the model has overfit the estimation data.
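A small sketch of this held-out comparison, assuming a simple count-based model with add-one smoothing (the setup and distribution are invented for illustration):

```python
import math
import random
from collections import Counter

random.seed(1)

# True distribution; we draw an estimation sample and a held-out sample.
P = {"a": 0.5, "b": 0.3, "c": 0.2}
outcomes, weights = zip(*P.items())
train = random.choices(outcomes, weights=weights, k=1000)
held_out = random.choices(outcomes, weights=weights, k=1000)

# Estimate Q from the training sample (add-one smoothing avoids zero probs).
counts = Counter(train)
total = len(train) + len(P)
Q = {x: (counts[x] + 1) / total for x in P}

def perplexity(data, model):
    return math.exp(-sum(math.log(model[x]) for x in data) / len(data))

# If the held-out perplexity is much higher than the training perplexity,
# that is a sign the model has overfit the estimation data.
print(perplexity(train, Q), perplexity(held_out, Q))
```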

The continuous case

Shannon obtained the continuous version of entropy in your post by simply replacing the summation sign with an integral rather than performing a rigorous derivation. You can approximate a continuous distribution by binning the random variable and then defining a probability distribution over the bins, with the approximation improving as the number of bins increases. In this sense, the exponential entropy of the approximating discrete distribution can be interpreted the same way as before: as an effective number of (binned) outcomes.

Unfortunately, as the number of bins goes to infinity to make the discrete distribution approach the continuous distribution in the limit, you end up with an inconvenient infinity in the expression. On reflection, this is not so surprising, as the probability of a single real number under a continuous distribution is zero.
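This divergence is easy to see numerically. The sketch below bins a standard normal on $[-5, 5]$: the discrete entropy of the binned distribution grows roughly as $h(X) + \ln(1/\Delta)$ as the bin width $\Delta$ shrinks, where $h(X) = \frac{1}{2}\ln(2\pi e)$ is the normal's differential entropy (the function name and interval are my own choices):

```python
import math

def binned_entropy(n_bins, lo=-5.0, hi=5.0):
    """Discrete entropy of a standard normal binned into n_bins equal bins."""
    width = (hi - lo) / n_bins
    h = 0.0
    for i in range(n_bins):
        x = lo + (i + 0.5) * width  # bin midpoint
        # Midpoint approximation to the bin's probability mass.
        p = math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * width
        if p > 0:
            h -= p * math.log(p)
    return h

diff_entropy = 0.5 * math.log(2 * math.pi * math.e)  # ≈ 1.4189

# The discrete entropy diverges as the bins shrink, tracking
# diff_entropy + ln(1/width) rather than converging to diff_entropy.
for n in (10, 100, 1000):
    width = 10 / n
    print(n, binned_entropy(n), diff_entropy + math.log(1 / width))
```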