Does every compact (not necessarily convex) set have extreme points?

convex-optimization, convex-analysis, linear-programming, non-convex-optimization, nonlinear-optimization

Let $A$ be a set in a normed vector space. Call a point $p$ in $A$ extreme if there do not exist $p_0, p_1$ in $A$, distinct from $p$, such that $\lambda p_0+(1-\lambda)p_1=p$ for some $\lambda \in [0,1]$. Can you give an example of a compact $A$ which has no extreme points? (I'd prefer an example in finite-dimensional Euclidean space if possible!)
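
To make the definition concrete, here is a minimal brute-force check on a finite subset of $\mathbb{R}^2$ (finite sets are compact). The helper `is_extreme` and the sample set are just an illustration I'm adding, not anything from the question itself.

```python
# Sketch: brute-force test of the definition above for a finite set in R^2.
import itertools
import numpy as np

def is_extreme(p, A, tol=1e-9):
    """True iff no p0, p1 in A, both distinct from p, satisfy
    p = lam*p0 + (1 - lam)*p1 for some lam in [0, 1]."""
    p = np.asarray(p, dtype=float)
    others = [np.asarray(q, dtype=float) for q in A if not np.allclose(q, p)]
    for p0, p1 in itertools.combinations(others, 2):
        d = p1 - p0
        denom = float(d @ d)
        if denom < tol:            # degenerate pair (p0 == p1)
            continue
        # Project p onto the line through p0 and p1: p0 + lam*(p1 - p0).
        lam = float((p - p0) @ d) / denom
        if -tol <= lam <= 1 + tol and np.allclose(p0 + lam * d, p, atol=1e-8):
            return False           # p lies on the segment between p0 and p1
    return True

A = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print({q: is_extreme(q, A) for q in A})
# (1, 0) is the midpoint of (0, 0) and (2, 0), hence not extreme;
# the other three points are extreme.
```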

The context for my question is that I was wondering why we need convexity for the statement, "a linear objective is maximized at an extreme point of a compact, convex set." I figured maybe it is because non-convex sets need not have extreme points, but couldn't think of an example.

Thank you! And please let me know if my question is "missing the point" — I've seen it suggested that "extreme points don't make sense for non-convex sets," but I don't understand why.

Best Answer

I don't understand the infinite-dimensional situation, so please permit me to assume that we are working in $\mathbb{R}^n$ (equivalently, any finite-dimensional normed vector space: the norm is only needed to define the topology, and all norms on a finite-dimensional space induce the same topology). In that case we can say the following:

  1. Every (non-empty!) compact subset of $\mathbb{R}^n$ has an extreme point.
  2. A linear objective is maximized at an extreme point of a compact subset of $\mathbb{R}^n$, as LinAlg says in the comments; this doesn't require convexity.

The following argument will prove both of these. We induct on the dimension $n$. When $n = 0$ the statement is clear ($\mathbb{R}^0$ is a point, hence $K$ is a point, hence an extreme point). Let $K \subset \mathbb{R}^n$ be compact (and non-empty). Consider any nonzero linear functional $f : \mathbb{R}^n \to \mathbb{R}$. By compactness, $f$ attains a maximal value $m$ on $K$. The subset $f^{-1}(m) \cap K$ of points of $K$ at which $f$ attains its maximum is a non-empty compact subset of the affine hyperplane $f^{-1}(m)$, which has dimension $n-1$, so by the inductive hypothesis it has an extreme point $p$. (Picture $K$ as a polyhedron and $f^{-1}(m) \cap K$ as a face of it, which is the interesting case; generically $f^{-1}(m) \cap K$ is a single point, e.g. if $K$ is an ellipsoid.)
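
Here is a minimal numerical sketch of this induction, assuming $K$ is handed to us as a finite sample of points; the function `extreme_point_via_induction` and the annulus example are my own illustration, not part of the original answer. Each pass maximizes one coordinate functional over the current set and restricts to its argmax set, mirroring the passage from $K$ to $f^{-1}(m) \cap K$; the surviving (lexicographically maximal) point is extreme.

```python
# Sketch of the induction for a finite point set in R^n.
import numpy as np

def extreme_point_via_induction(K, tol=1e-12):
    """K: (N, n) array of distinct points; returns one extreme point of the set
    (the lexicographically maximal point, up to the tolerance)."""
    K = np.asarray(K, dtype=float)
    for i in range(K.shape[1]):
        m = K[:, i].max()              # maximal value of f(x) = x_i on K
        K = K[K[:, i] >= m - tol]      # restrict to the "face" f^{-1}(m) ∩ K
    return K[0]

# Example: points sampled from the (non-convex) annulus 1/2 <= |x| <= 1.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
r = rng.uniform(0.5, 1.0, 2000)
annulus = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
p = extreme_point_via_induction(annulus)
print(p, np.linalg.norm(p))  # the returned point lies close to the outer circle
```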

This point is necessarily also an extreme point of $K$: if $p = \lambda p_0 + (1 - \lambda) p_1$ where $p_0, p_1 \in K$ are distinct from $p$ (so we may take $\lambda \in (0,1)$, since $\lambda = 0$ or $1$ would force $p$ to equal $p_1$ or $p_0$), then $f(p) = \lambda f(p_0) + (1 - \lambda) f(p_1)$; by maximality $f(p_0), f(p_1) \le f(p)$, hence $\lambda f(p_0) + (1 - \lambda) f(p_1) \le f(p)$ with equality iff $f(p_0) = f(p_1) = f(p) = m$, in which case $p_0, p_1 \in f^{-1}(m) \cap K$. So if $p$ weren't an extreme point of $K$, it wouldn't be an extreme point of $f^{-1}(m) \cap K$ either.

So $K$ has an extreme point, and $f$ is maximized at it.
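
For a concrete non-convex instance: take $K$ to be the closed annulus $\{x \in \mathbb{R}^2 : \tfrac{1}{2} \le \|x\| \le 1\}$ and $f(x, y) = x$. Then $m = 1$ and $f^{-1}(1) \cap K = \{(1, 0)\}$, which is trivially an extreme point of that slice and hence, by the argument above, an extreme point of $K$. In fact the extreme points of this $K$ are exactly the points of the outer circle; no point of the inner circle or of the interior is extreme, since each such point is the midpoint of a short segment contained in $K$.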
