[Math] Dirac measures are extreme points of the unit ball of $C(K)^*$.

functional-analysis, measure-theory, real-analysis

I've seen proofs that Dirac measures are the extreme points of the set of probability measures, but how do we prove it for general complex Borel measures with total variation norm 1?

I only want to know why they're in the set of all extreme points.

Best Answer

$\newcommand{\norm}[1]{\left\lVert#1\right\rVert}$ Let $(X, \mathcal{A})$ be a measurable space and denote by $\mathcal{M}$ the space of (finite) complex measures on $(X, \mathcal{A})$, endowed with the variation norm. Let $B$ be the closed unit ball in $\mathcal{M}$. I will show the following:

For any $x \in X$ and $a \in \mathbb{C}$ with $|a| =1$, the measure $a \delta_x$ is an extreme point of $B$ (where $\delta_x$ is the Dirac measure centered at $x$).
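
(As an aside, this answers the $C(K)^*$ formulation in the title when $K$ is compact Hausdorff: by the Riesz representation theorem, every bounded linear functional on $C(K)$ is of the form
$$ \Lambda_\mu(f) = \int_K f \, d\mu, \qquad \norm{\Lambda_\mu}_{C(K)^*} = |\mu|(K) = \norm{\mu}, $$
for a unique regular complex Borel measure $\mu$ on $K$; here $\Lambda_\mu$ is just my notation for the functional induced by $\mu$. Since this correspondence is an isometric isomorphism, and an extreme point of $B$ that lies in a smaller convex set (here, the unit ball of regular Borel measures) is still extreme there, the statement above transfers directly to the unit ball of $C(K)^*$.)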

Proof:

Let $\mu := a \delta_x$ for some $|a| =1$. The idea of the proof is that the variation measure $| \mu |$ is equal to $\delta_x$, so that $| \mu |$ is an extreme point of the convex set $\mathcal{P}$ of probability measures on $(X, \mathcal{A})$.
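
To spell out why $| a \delta_x | = \delta_x$, recall that the variation measure is given by a supremum over finite measurable partitions: for any $A \in \mathcal{A}$ and any finite measurable partition $\{A_j\}$ of $A$,
$$ \sum_j | a \delta_x (A_j) | = |a| \sum_j \delta_x(A_j) = \delta_x(A), $$
since at most one of the $A_j$ can contain $x$. Every partition sum has the same value, so $|a \delta_x|(A) = \delta_x(A)$ for all $A$, i.e. $|\mu| = \delta_x$.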

Suppose that $\mu = s \nu_1 + (1-s) \nu_2$, where $\nu_1, \nu_2 \in B$ and $0 <s <1$. We want to prove that $\nu_1 = \nu_2 = \mu$.

First, note that necessarily $\norm{ \nu_1 } = \norm{ \nu_2 } = 1$ (so that $| \nu_1 |$ and $| \nu_2 |$ are probability measures). Otherwise we would have $\norm{ \mu } = \norm{ s \nu_1 + (1-s) \nu_2 } \leq s \norm{\nu_1} +(1-s) \norm{ \nu_2} < s + (1-s) = 1$, contradicting $\norm{\mu} = |a| \, \norm{\delta_x} = 1$.

We have by definition of $\mu$ that: $$ \delta_x = | \mu | = | s \nu_1 +(1-s) \nu_2 | \leq s |\nu_1| +(1-s) |\nu_2|. $$
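
(The inequality between variation measures used here can be checked from the same partition definition: for any $A \in \mathcal{A}$ and any finite measurable partition $\{A_j\}$ of $A$,
$$ \sum_j | s \nu_1(A_j) + (1-s) \nu_2(A_j) | \leq s \sum_j |\nu_1(A_j)| + (1-s) \sum_j |\nu_2(A_j)| \leq s |\nu_1|(A) + (1-s) |\nu_2|(A), $$
and taking the supremum over all such partitions gives $| s \nu_1 + (1-s) \nu_2 |(A) \leq s |\nu_1|(A) + (1-s) |\nu_2|(A)$.)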

The measure $\nu := s | \nu_1 | + (1-s) |\nu_2|$ is a probability measure, so the inequality $\delta_x \leq \nu$ is in fact an equality. Indeed, if $A$ is measurable and contains $x$ we must have $$ 1 = \delta_x (A) \leq \nu(A) \leq 1, $$ so $\nu(A)=1$. On the other hand, if $x$ is not in $A$, then $\nu(A) = \nu(X) - \nu(X \setminus A) = 1 - 1 = 0$. This means that $\nu = \delta_x$.

Therefore we have $\delta_x = s | \nu_1 | + (1-s) | \nu_2 |$. Since $| \nu_1 |$ and $| \nu_2 |$ are probability measures and $\delta_x$ is an extreme point of $\mathcal{P}$, we must have $| \nu_1 | = | \nu_2 | = \delta_x$.
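
(For completeness, here is the standard argument, referenced in the question, that $\delta_x$ is an extreme point of $\mathcal{P}$: suppose $\delta_x = t \lambda_1 + (1-t) \lambda_2$ with $\lambda_1, \lambda_2 \in \mathcal{P}$ and $0 < t < 1$. For any measurable $A$ with $x \notin A$,
$$ 0 = \delta_x(A) = t \lambda_1(A) + (1-t) \lambda_2(A), $$
and since both terms are nonnegative, $\lambda_1(A) = \lambda_2(A) = 0$. If instead $x \in A$, then $\lambda_i(A) = 1 - \lambda_i(X \setminus A) = 1$, so $\lambda_1 = \lambda_2 = \delta_x$.)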

The equality $| \nu_1 | = | \nu_2 | = \delta_x$ implies that $\nu_1$ and $\nu_2$ are multiples of $\delta_x$. Indeed, if $A$ is a measurable set that does not contain $x$, then $$ | \nu_i (A) | \leq |\nu_i|(A) = \delta_x (A) = 0, $$ so that $\nu_i(A) =0$. On the other hand, if $A$ contains $x$ then $\nu_i(A) = \nu_i(X) - \nu_i(X \setminus A ) = \nu_i(X)$. Therefore we have $\nu_i = b_i \delta_x$, where $b_i = \nu_i(X)$. Furthermore, we have $| \nu_i | = |b_i \delta_x | = |b_i| \delta_x$, which implies that $|b_i|=1$.

Our assumption that $\mu = s \nu_1 + (1-s) \nu_2$ thus gives $$ a \delta_x = (s b_1 + (1-s) b_2) \delta_x, $$ and therefore $a = s b_1 +(1-s) b_2$, where $0< s <1$ and $a$, $b_1$ and $b_2$ are complex numbers of modulus $1$. This is only possible if $a = b_1 = b_2$, and we conclude that $\nu_1 = \nu_2 = a \delta _x$. $\square$
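
A remark on the last step, namely that $a = s b_1 + (1-s) b_2$ with $0 < s < 1$ and $|a| = |b_1| = |b_2| = 1$ forces $a = b_1 = b_2$: expanding the modulus,
$$ 1 = |a|^2 = |s b_1 + (1-s) b_2|^2 = s^2 + (1-s)^2 + 2 s (1-s) \operatorname{Re} (b_1 \overline{b_2}), $$
and since $s^2 + (1-s)^2 + 2 s (1-s) = 1$ and $s(1-s) > 0$, this gives $\operatorname{Re}(b_1 \overline{b_2}) = 1$. Because $|b_1 \overline{b_2}| = 1$, it follows that $b_1 \overline{b_2} = 1$, i.e. $b_1 = b_2$, and then $a = s b_1 + (1-s) b_1 = b_1$.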
