Apex angle of a triangle as a random variable

probability, probability-distributions, probability-theory, triangles

I am not an expert in probability theory, and I apologize if I make some mistakes in "translating" the following problem into the language of random variables. Any help in improving the formulation of the problem is also very much welcome.

Problem. Given two angles $\alpha, \beta \in (0,2\pi)$, consider the isosceles triangle with apex angle $\theta = \alpha - \beta$ (or $2\pi - (\alpha - \beta)$ if $\alpha - \beta > \pi$) and legs of length $1$. By well-known elementary geometry, (twice) the area (without sign) of the triangle is given by $\vert \sin \theta \vert$.
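
For concreteness, here is a quick numerical sanity check of the twice-area formula (a sketch of my own in Python/numpy; the coordinate placement is an arbitrary choice, not part of the problem):

```python
# Check: for an isosceles triangle with legs of length 1 and apex angle
# theta, twice the unsigned area equals |sin(theta)|.
import numpy as np

rng = np.random.default_rng(0)
for theta in rng.uniform(0, 2 * np.pi, size=5):
    # Put the apex at the origin, legs ending at (1, 0) and (cos t, sin t).
    a = np.array([1.0, 0.0])
    b = np.array([np.cos(theta), np.sin(theta)])
    twice_area = abs(a[0] * b[1] - a[1] * b[0])  # magnitude of the 2D cross product
    assert np.isclose(twice_area, abs(np.sin(theta)))
```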

Now consider a probability measure $\mu$ on $(0,2\pi)$ and suppose $\alpha, \beta$ are independent random variables, each distributed according to $\mu$.

Q. Which probability measure $\mu$ maximizes the mean area of the triangle, i.e.
$$
\int_0^{2\pi}\int_0^{2\pi} | \sin(\alpha - \beta)| \, d\mu(\alpha) \, d \mu(\beta)?
$$

Thanks to @JeanMarie's comments below, we can get rid of one angle (by fixing one side of the triangle along the $x$-axis), so the question above can be equivalently formulated as follows:

Q. Which probability measure $\mu$ maximizes the mean area of the triangle, i.e.
$$
\int_0^{2\pi} | \sin(\alpha)| d\mu(\alpha)?
$$

For instance, if $\mu$ is the normalized Lebesgue measure on $(0,2\pi)$, the mean of (twice) the area of the triangle is
$$
\frac{1}{4\pi^2}\int_0^{2\pi} \int_0^{2\pi} \vert \sin (\alpha-\beta) \vert \, d\alpha \, d \beta = \frac{2}{\pi}.
$$

Is this the maximum value?
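
As a quick Monte Carlo cross-check of the value $2/\pi$ computed above (a sketch of my own; the sample size is arbitrary):

```python
# With alpha, beta i.i.d. uniform on (0, 2*pi), the mean of
# |sin(alpha - beta)| should be 2/pi ≈ 0.6366.
import numpy as np

rng = np.random.default_rng(42)
n = 10**6
alpha = rng.uniform(0, 2 * np.pi, n)
beta = rng.uniform(0, 2 * np.pi, n)
print(np.abs(np.sin(alpha - beta)).mean(), 2 / np.pi)  # both ≈ 0.6366
```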

Best Answer

Here is a proof that the uniform distribution over $[0, 2\pi]$ (i.e. the normalized Lebesgue measure) maximizes the integral, based on a variational argument. The idea of this proof is based on my other answer to a similar question.

On the space $\mathcal{M}$ of signed Borel measures on $[0, 2\pi]$, define $\langle \cdot, \cdot \rangle$ by

$$ \forall \mu, \nu \in \mathcal{M}, \quad \langle \mu, \nu \rangle = \int_{[0,2\pi]^2} \lvert \sin(x-y)\rvert \, \mu(\mathrm{d}x)\nu(\mathrm{d}y). $$
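
For intuition, here is how this form looks on discrete measures (a sketch of my own; the helper name `form` is hypothetical): for $\mu = \sum_i w_i \delta_{x_i}$ and $\nu = \sum_j v_j \delta_{y_j}$ one gets $\langle \mu, \nu \rangle = \sum_{i,j} w_i v_j \lvert\sin(x_i - y_j)\rvert$.

```python
# <mu, nu> for discrete measures: mu = sum_i w[i]*delta_{x[i]},
# nu = sum_j v[j]*delta_{y[j]}.
import numpy as np

def form(w, x, v, y):
    w, x, v, y = map(np.asarray, (w, x, v, y))
    return w @ np.abs(np.sin(x[:, None] - y[None, :])) @ v

# I(mu) = <mu, mu>; a discrete approximation of the uniform measure gives ~2/pi:
m = 400
w = np.full(m, 1 / m)
x = np.linspace(0, 2 * np.pi, m, endpoint=False)
print(form(w, x, w, x))  # ≈ 2/pi ≈ 0.6366
```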

Obviously $\langle \cdot, \cdot \rangle$ is a symmetric bilinear form on $\mathcal{M}$. Using this, the OP's question can be rephrased as the problem of finding maximizers of the functional $I(\mu) = \langle \mu, \mu \rangle$ over the set of all probability measures on $[0, 2\pi]$. In this regard, we have the following characterization:

Proposition. Write $\mathcal{M}_1$ for the set of all $\mu \in \mathcal{M}$ satisfying $\mu([0,2\pi]) = 1$. Then, for $\mu \in \mathcal{M}_1$, the following are equivalent:

  1. $\mu$ maximizes the functional $I$.
  2. $s \mapsto \langle \mu, \delta_s \rangle$ is constant for $s \in [0, 2\pi]$.

Assuming this proposition, we see that the normalized Lebesgue measure $\mu = \frac{1}{2\pi} \operatorname{Leb}|_{[0,2\pi]}$ satisfies the second condition (indeed, $\langle \mu, \delta_s \rangle = \frac{1}{2\pi}\int_0^{2\pi} \lvert\sin(x-s)\rvert \, \mathrm{d}x = \frac{2}{\pi}$ for every $s$), and so it maximizes $I$ not only among probability measures but over all of $\mathcal{M}_1$. (On the other hand, I have not proved uniqueness of the maximizer. I guess that the maximizer is unique, based on the idea that the map $s \mapsto \langle \mu, \delta_s \rangle$ behaves like a Fourier transform, and so should admit an inverse transform.)
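
Numerically, the second condition for the normalized Lebesgue measure can be checked as follows (a sketch of my own, using a simple midpoint rule; grid size and test points are arbitrary):

```python
# s -> <mu, delta_s> = (1/(2*pi)) * integral of |sin(x - s)| over [0, 2*pi]
# should be the constant 2/pi for every s.
import numpy as np

N = 200000
x = (np.arange(N) + 0.5) * (2 * np.pi / N)  # midpoint grid on [0, 2*pi]
for s in np.linspace(0, 2 * np.pi, 7):
    val = np.mean(np.abs(np.sin(x - s)))    # midpoint-rule approximation
    assert np.isclose(val, 2 / np.pi, atol=1e-6)
```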

As for the proof of the proposition, we resolve the easier part first. This is a typical application of the variational argument.

Proof of $(1)\Rightarrow(2)$. Assume that $\mu \in \mathcal{M}_1$ maximizes the functional $I$. Then for any $s, t \in [0, 2\pi]$, the measure $\mu + \epsilon \delta_s - \epsilon \delta_t$ still lies in $\mathcal{M}_1$, so the map $ \epsilon \mapsto I(\mu + \epsilon \delta_s - \epsilon \delta_t) $ attains its maximum at $\epsilon = 0$. By bilinearity its derivative at $\epsilon = 0$ equals $2\langle \mu, \delta_s - \delta_t \rangle$, which must therefore vanish; hence $\langle \mu, \delta_s \rangle = \langle \mu, \delta_t \rangle$, proving the desired implication. $\square$

So we turn to proving the opposite direction.

Proof of $(2)\Rightarrow(1)$. We begin by citing the following Fourier series.

$$ \lvert \sin x \rvert = \frac{2}{\pi} - \frac{4}{\pi} \sum_{n=1}^{\infty} \frac{\cos (2nx)}{4n^2 - 1}. $$
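
One can check this expansion numerically (a sketch of my own; the truncation level is arbitrary):

```python
# The truncated Fourier series should approximate |sin x| uniformly,
# with error of order 1/N (the coefficients decay like 1/n^2).
import numpy as np

x = np.linspace(0, 2 * np.pi, 1001)
n = np.arange(1, 2001)[:, None]
series = 2 / np.pi - (4 / np.pi) * np.sum(np.cos(2 * n * x) / (4 * n**2 - 1), axis=0)
print(np.max(np.abs(series - np.abs(np.sin(x)))))  # small; shrinks as N grows
```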

For a proof of this expansion, see e.g. this answer. To make use of it, write $\hat{\mu}(\xi) = \int_{[0,2\pi]} e^{i \xi x} \, \mu(\mathrm{d}x)$ for the Fourier transform of $\mu$. Since the series converges uniformly, we may substitute it into $\langle \mu, \nu \rangle$ and interchange the order of integration and summation. This yields

$$ \langle \mu, \nu \rangle = \frac{2}{\pi} \hat{\mu}(0)\overline{\hat{\nu}(0)} - \frac{4}{\pi} \sum_{n=1}^{\infty} \frac{1}{4n^2 - 1}\operatorname{Re}\left( \hat{\mu}(2n)\overline{\hat{\nu}(2n)} \right). $$

From this, we immediately observe that

$$ \text{if} \quad \mu([0,2\pi]) = 0, \qquad \text{then} \quad \langle \mu, \mu \rangle = -\frac{4}{\pi} \sum_{n=1}^{\infty} \frac{1}{4n^2 - 1}\left|\hat{\mu}(2n)\right|^2 \leq 0. $$
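
Both displayed formulas can be tested on discrete measures (a sketch of my own; the atoms and weights are random, and the series is truncated):

```python
# For a signed discrete measure with total mass zero, the direct value of
# <mu, mu> matches the (truncated) spectral series, and both are <= 0.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 2 * np.pi, 8)        # atoms of mu
w = rng.uniform(0, 1, 8)
w = w / w.sum() - 1 / 8                 # weights sum to zero

direct = w @ np.abs(np.sin(x[:, None] - x[None, :])) @ w
n = np.arange(1, 20001)
hat = np.exp(2j * n[:, None] * x[None, :]) @ w        # mu-hat(2n)
series = -(4 / np.pi) * np.sum(np.abs(hat) ** 2 / (4 * n**2 - 1))
print(direct, series)                   # agree up to truncation error
assert direct <= 1e-12
```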

Now we are ready for the final blow. Let $\mu \in \mathcal{M}_1$ be such that $\langle \mu, \delta_s \rangle = \int_{[0,2\pi]} \lvert \sin(x-s)\rvert \, \mu(\mathrm{d}x)$ does not depend on $s \in [0,2\pi]$. If $c$ denotes this constant value, then $\langle \mu, \nu \rangle = \int_{[0,2\pi]} \langle \mu, \delta_s \rangle \, \nu(\mathrm{d}s) = c\,\nu([0,2\pi]) = c$ for any $\nu \in \mathcal{M}_1$. So,

$$ \forall \nu \in \mathcal{M}_1, \qquad I(\nu) = I(\mu) + 2\underbrace{\langle \mu, \nu-\mu \rangle}_{c-c = 0} + \underbrace{I(\nu - \mu)}_{\leq 0} \leq I(\mu). $$

Therefore $\mu$ maximizes $I$ over $\mathcal{M}_1$ as desired. $\square$
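
To illustrate the conclusion (a sketch of my own; the alternative distributions are arbitrary choices, not from the answer):

```python
# The uniform distribution attains I = 2/pi ≈ 0.6366; other measures do no better.
import numpy as np

rng = np.random.default_rng(7)

def I_mc(sampler, n=10**6):
    # Monte Carlo estimate of I(mu) = E|sin(alpha - beta)|, alpha, beta i.i.d. ~ mu
    return np.abs(np.sin(sampler(n) - sampler(n))).mean()

print("uniform:   ", I_mc(lambda n: rng.uniform(0, 2 * np.pi, n)))      # ≈ 2/pi
print("point mass:", I_mc(lambda n: np.full(n, np.pi / 2)))             # = 0
print("two atoms: ", I_mc(lambda n: rng.choice([0.0, np.pi / 2], n)))   # = 1/2
```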
