How do I find all extreme points of the set
$\displaystyle C=\{x\in\mathbb{R}^n \mid \sum_{i=1}^n |x_i| \leq 1 \}$?
I guess that $x=0$ is an extreme point, but how can I show it, and is it the only one?
convex-analysis
Generally the procedure is to guess & verify.
In this case your guess needs a correction: $x=0$ is in fact not extreme (it is the midpoint of $e_1$ and $-e_1$, both of which lie in $C$). Since $C = \operatorname{co} \{ \pm e_k\}_{k=1}^n$ (the convex hull of the signed unit vectors), you need only check that these points are indeed extreme points, since no other point can be extreme.
To see why no other point can be an extreme point, suppose $x \in \operatorname{co} \{b_k\}_k$ (with the $b_k$ being distinct and finite in number) and $x \notin \{ b_k \}_k $. Then $x = \sum_k \lambda_k b_k$ where $\lambda_k \ge 0$, $\sum_k \lambda_k = 1$. Since $x \notin \{ b_k \}_k $, there must be at least one $\lambda_i \in (0,1)$. Then $x = \lambda_i b_i + (1 -\lambda_i) {1 \over \sum_{k \neq i} \lambda_k }\sum_{k \neq i} \lambda_k b_k$ and since $b_i, {1 \over \sum_{k \neq i} \lambda_k }\sum_{k \neq i} \lambda_k b_k \in \operatorname{co} \{b_k\}_k$ we see that $x$ cannot be an extreme point. Hence the extreme points must be a subset of the $b_k$.
(Aside: The above proof is not quite correct as it is possible that $b_i = {1 \over \sum_{k \neq i} \lambda_k }\sum_{k \neq i} \lambda_k b_k$. If this is the case, then we can write $x = {1 \over \sum_{k \neq i} \lambda_k }\sum_{k \neq i} \lambda_k b_k$ and repeat the process as necessary until $b_i \neq {1 \over \sum_{k \neq i} \lambda_k }\sum_{k \neq i} \lambda_k b_k$.)
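As a concrete sanity check of this hull representation (a sketch; the helper name is my own), the following expresses a point $x$ of the $\ell_1$ ball as a convex combination of the $2n$ signed unit vectors, splitting any leftover mass evenly between $e_1$ and $-e_1$ so the weights sum to $1$:

```python
import numpy as np

def l1_ball_coeffs(x):
    """Given x with sum|x_i| <= 1, return nonnegative weights (lam_plus,
    lam_minus) on the vectors +e_i and -e_i that sum to 1 and satisfy
    sum_i (lam_plus[i] - lam_minus[i]) * e_i = x."""
    x = np.asarray(x, dtype=float)
    lam_plus = np.clip(x, 0.0, None)    # weight x_i on +e_i where x_i > 0
    lam_minus = np.clip(-x, 0.0, None)  # weight -x_i on -e_i where x_i < 0
    slack = 1.0 - np.abs(x).sum()       # leftover mass for interior points
    # split the slack between +e_1 and -e_1: these contributions cancel
    lam_plus[0] += slack / 2
    lam_minus[0] += slack / 2
    return lam_plus, lam_minus

x = np.array([0.3, -0.2, 0.1])
lam_plus, lam_minus = l1_ball_coeffs(x)
# lam_plus - lam_minus reconstructs x, and all 2n weights sum to 1
```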
To see why the $\pm e_k$ are extreme, we have the following useful result:
Suppose $C$ is convex, $h$ some direction, and that $b \in C$ is the unique solution to the problem $\langle h, b \rangle = \sup_{c \in C} \langle h, c \rangle$. Then $b$ is an extreme point of $C$. (This is straightforward to prove by contradiction.) Note that this is a sufficient, but not necessary, condition for $b$ to be extreme.
Proof: Suppose that $b \in C$ is the unique solution to $\langle h, b \rangle = \sup_{c \in C} \langle h, c \rangle$ for some direction $h$, but that $b$ is not an extreme point. Then there are $y,z \in C$ distinct from $b$ and $\alpha \in (0,1)$ such that $b= \alpha y + (1-\alpha)z$. Since $\langle h, y \rangle, \langle h, z \rangle \le \langle h, b \rangle$ while $\alpha \langle h, y \rangle + (1-\alpha) \langle h, z \rangle = \langle h, b \rangle$, we must have $\langle h, y \rangle = \langle h, z \rangle = \langle h, b \rangle$, which contradicts the fact that $b$ is the unique maximiser.
If we choose $h = e_1$, then $\langle e_1, e_1 \rangle = 1 = \sup_{x \in C} \langle e_1, x \rangle$, and $e_1$ is the unique maximiser: $\langle e_1, x \rangle = x_1 \le \sum_i |x_i| \le 1$, with equality throughout only when $x = e_1$. Hence $e_1$ is extreme. The other points $\pm e_k$ follow in the same manner.
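This criterion can also be checked numerically (a sketch; the sampling scheme is my own choice): random points of the $\ell_1$ ball all give $\langle e_1, x \rangle < 1$, while $\langle e_1, e_1 \rangle = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 1000
# sample m points of the open l1 ball: a Dirichlet vector (abs-sum 1)
# times random signs, scaled by a radius strictly below 1
signs = rng.choice([-1.0, 1.0], size=(m, n))
radii = rng.uniform(0.0, 1.0, size=(m, 1))
samples = signs * rng.dirichlet(np.ones(n), size=m) * radii

h = np.zeros(n)
h[0] = 1.0         # direction h = e_1
vals = samples @ h  # <e_1, x> = x_1 for each sample
# every sample satisfies <e_1, x> <= sum|x_i| < 1, so only e_1 attains 1
```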
Not all such $a$ are going to be in $C$, because there's an additional condition: $y_i$ and $y_j$ must be of opposite sign, otherwise the inner product constraint isn't satisfied (which also means that if there isn't at least one index of each sign, then $C = \{0\}$).
Then we want to prove three things:
1. The set $A = \{a \in \mathbb{R}^n : \exists i,j\ (a_i = a_j = y_i = -y_j = 1), (k \neq i, j \implies a_k = 0) \}$ is a subset of $C$.
2. Every $a \in A$ is an extreme point of $C$.
3. If $x$ is an extreme point of $C$, then $x \in A$.
I think 1 is reasonably simple. To prove 2, I'm going to use the following property of an extreme point:
If $x \in \mathbb{R}^n$ such that $a + x, a - x \in C$, then $x = 0$.
Consider $a \in A$ such that $a_i = a_j = y_i = 1$, $y_j = -1$ and $a_k = 0$ for any $k \neq i, j$. Suppose we have $x \in \mathbb{R}^n$ such that $a + x, a - x \in C$. Then by definition:
$a_k \pm x_k \geq 0 \implies \pm x_k \geq 0 \implies x_k = 0$ when $k \neq i,j$.
$\sum_k (a_k \pm x_k) = 2 \pm (x_i + x_j) = 2$, so $x_i + x_j = 0$. The inner-product constraint $\langle y, a \pm x \rangle = 0$ gives $(1 \pm x_i) - (1 \pm x_j) = \pm(x_i - x_j) = 0$, so $x_i = x_j$. Together these force $x_i = x_j = 0$.
Thus $x = 0$, meaning that $a$ is an extreme point of $C$.
Finally, we want to prove 3: all extreme points of $C$ are in $A$. I'm going to start by defining $Y^+ := \{i : y_i = +1\}$ and $Y^- := \{i : y_i = -1\}$, i.e. the indices of positive and negative values of $y_i$. I claim that any $p \in C \setminus A$ must have two non-zero coordinates whose indices both lie in $Y^+$ or both in $Y^-$ (to prove this, assume $p$ has at most one non-zero coordinate indexed in each; you'll quickly find that those coordinates must both be $1$, so $p \in A$, a contradiction). Without loss of generality, assume the indices are in $Y^+$; in other words, there are distinct indices $i, j$ such that $p_i, p_j > 0$.
Then take $\epsilon = \min(p_i, p_j) / 2$ and consider $x \in \mathbb{R}^n$ with $x_i = \epsilon$, $x_j = -\epsilon$, and $x_k = 0$ for $k \neq i, j$. It is easy to check that $p \pm x \in C$, so $p$ cannot be an extreme point of $C$. This proves that all of the extreme points lie in $A$.
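The fragment above never restates the definition of $C$, so as a hedged numerical check I assume, consistently with the constraints it uses, that $C = \{x \in \mathbb{R}^n : x \ge 0,\ \sum_i x_i = 2,\ \langle y, x \rangle = 0\}$; the perturbation step then looks like:

```python
import numpy as np

# ASSUMED definition of C, reconstructed from the constraints above:
# C = {x >= 0 : sum(x) = 2, <y, x> = 0}
y = np.array([1.0, 1.0, -1.0, -1.0])

def in_C(x, tol=1e-9):
    return bool((x >= -tol).all()
                and abs(x.sum() - 2.0) < tol
                and abs(y @ x) < tol)

# p in C with two positive coordinates at indices 0, 1 in Y+ = {i : y_i = 1}
p = np.array([0.6, 0.4, 0.7, 0.3])

i, j = 0, 1
eps = min(p[i], p[j]) / 2
x = np.zeros_like(p)
x[i], x[j] = eps, -eps
# p + x and p - x both stay in C, so p is a midpoint and not extreme
```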
Best Answer
The boundary of your set is $$\partial C =\Big\{x\in \mathbb R^n: \sum_{i=1}^{n}|x_i|=1 \Big\}.$$ All the signed unit vectors $$e_j^+=(0,0,\ldots,0,\underbrace{+1}_{j\text{-th position}},0,\ldots,0,0)$$ and $$e_j^-=(0,0,\ldots,0,\underbrace{-1}_{j\text{-th position}},0,\ldots,0,0)$$ for $j=1,2,\ldots,n$ are extreme points: none of them lies in an open line segment joining two points of $C$. Moreover, the point $x=0$ is not an extreme point, since $$0=\tfrac{1}{2}e_j^+ + \tfrac{1}{2}e_j^-$$ for any $j$ (in words: $x=0$ is the midpoint of the line segment between the points $e_j^+$ and $e_j^-$, both of which are in $C$). Finally, there are no other extreme points of $C$, since any point of $C$ can be written as a convex combination of the points $\pm e_j$, $1\le j \le n$. Hence the set of extreme points of $C$ is $$\operatorname{extr}(C)=\{\pm e_j : j=1,2,\ldots,n\}.$$