[Math] What’s the maximum entropy probability distribution given bounds [a,b] and mean

Tags: entropy, pr.probability, st.statistics

What is the continuous probability distribution that maximizes entropy, given only the bounds [a,b] of the random variable and the mean mu of the distribution?

For example:

  • if a=0, b=1, and mu=0.5, it should return the uniform distribution U[0,1].

  • if a=10, b=20, and mu=20, it should return a Dirac delta at x=20.

  • if a=0, b=1, and mu=0.8, it should return … ?

I imagine the general solution will be based on the Beta distribution with some alpha and beta parameters expressed in terms of a, b, and mu, but I don't know.

Many thanks!

Best Answer

You can maximize the entropy using standard calculus of variations, enforcing the constraints that the distribution is properly normalized and that its mean equals $\mu$ with Lagrange multipliers. You then find that the density has the form:

$$p(x) = \frac{\alpha\, e^{\alpha x}}{e^{\alpha b}-e^{\alpha a}}, \qquad x\in [a,b]$$

for $\alpha$ the unique solution to

$$\mu = \frac{\int_a^b \alpha x\, e^{\alpha x}\, dx}{e^{\alpha b} - e^{\alpha a}} = \frac{b\, e^{\alpha b} - a\, e^{\alpha a}}{e^{\alpha b}-e^{\alpha a}} - \frac{1}{\alpha}$$
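Note the limiting cases, which match the examples in the question: as $\alpha \to 0$ the density tends to the uniform distribution on $[a,b]$ (the case $\mu = (a+b)/2$), and as $\alpha \to +\infty$ or $\alpha \to -\infty$ the mass concentrates at $b$ or $a$, the Dirac-delta cases.

Since the equation for $\alpha$ is transcendental, it has to be solved numerically in general. Below is a minimal sketch in Python of one way to do that (not part of the original answer): the mean is monotonically increasing in $\alpha$, so a simple bisection works. The function names (`solve_alpha`, `mean_given_alpha`, `density`) and the fixed bracket for $\alpha$ are illustrative choices, and the exactly-degenerate case $\mu = a$ or $\mu = b$ (the Dirac limit) is not reachable this way.

```python
import math

def mean_given_alpha(alpha, a, b):
    """Mean of the density p(x) proportional to e^{alpha x} on [a, b]."""
    if abs(alpha) < 1e-12:          # alpha -> 0 limit: uniform distribution
        return (a + b) / 2.0
    ea, eb = math.exp(alpha * a), math.exp(alpha * b)
    return (b * eb - a * ea) / (eb - ea) - 1.0 / alpha

def solve_alpha(a, b, mu, tol=1e-12):
    """Find alpha so that the mean equals mu, by bisection.

    Assumes a < mu < b; mu = (a+b)/2 corresponds to alpha = 0 (uniform).
    """
    # Fixed bracket chosen to avoid overflow in exp(); for mu extremely
    # close to a or b a wider bracket (or a stabler formula) is needed.
    lo, hi = -50.0 / (b - a), 50.0 / (b - a)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_given_alpha(mid, a, b) < mu:   # mean increases with alpha
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def density(x, alpha, a, b):
    """Maximum-entropy density p(x) = alpha e^{alpha x} / (e^{alpha b} - e^{alpha a})."""
    if abs(alpha) < 1e-12:
        return 1.0 / (b - a)
    return alpha * math.exp(alpha * x) / (math.exp(alpha * b) - math.exp(alpha * a))

# The a=0, b=1, mu=0.8 example from the question:
alpha = solve_alpha(0.0, 1.0, 0.8)
print(alpha)                               # roughly 4.9: density tilted toward x = 1
print(mean_given_alpha(alpha, 0.0, 1.0))   # ~0.8, sanity check
```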
