How is $x = -\text{sign}(c_i)e_i$ the solution to minimize $c^T x$ subject to $\|x\|_1 \le 1$

Tags: convex optimization, linear programming, optimization, proof-explanation

From Convex Optimization by Boyd & Vandenberghe:

$$\begin{array}{ll} \text{minimize} & c^T x\\ \text{subject to} & \|x\|_1 \le 1\end{array}$$

Let $i$ be an index such that $\|c\|_{\infty} = |c_i|$; then a
solution is given by $$x = -\text{sign}(c_i)e_i$$ where $e_i$ is the $i$-th
standard basis vector.

Can someone explain how this is derived? I don't see why the unit vector in the maximum direction of $c$ minimizes this.

Best Answer

Since $\Vert x\Vert_1\leq1$, we have $\sum_j\vert x_j\vert\leq 1$. Let $x_0=-\mathrm{sign}(c_{i})e_i$ as in your question. For an arbitrary feasible $x$ we then have:
$$ |c^Tx|=\left\vert\sum_{j} c_j x_j\right\vert\leq\sum_j|c_j||x_j|\leq|c_i|\sum_j|x_j|=|c_i|\,\Vert x\Vert_1\leq|c_i| $$
Hence $c^Tx\geq-|c^Tx|\geq-|c_i|$ for every feasible $x$. Since $x_0$ is feasible ($\Vert x_0\Vert_1=1$) and $c^Tx_0=-\mathrm{sign}(c_i)\,c_i=-|c_i|$, the lower bound is attained, so $x_0$ is indeed optimal.
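As a sanity check, the closed-form minimizer can be verified numerically: below is a small sketch (the variable names are my own) that builds $x_0$ for a random $c$, confirms $c^Tx_0 = -\|c\|_\infty$, and checks that no randomly sampled feasible point does better.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
c = rng.normal(size=n)

# Closed-form minimizer: x0 = -sign(c_i) e_i, where i attains ||c||_inf
i = np.argmax(np.abs(c))
x0 = np.zeros(n)
x0[i] = -np.sign(c[i])

# The optimal value should be -|c_i| = -||c||_inf
assert np.isclose(c @ x0, -np.abs(c).max())

# Sample random points, scale them into the l1 ball, and check
# that none achieves a smaller objective than x0
for _ in range(10_000):
    x = rng.normal(size=n)
    x /= max(np.abs(x).sum(), 1.0)  # now ||x||_1 <= 1
    assert c @ x >= c @ x0 - 1e-12
```

The sampling check mirrors the inequality chain in the answer: every feasible $x$ satisfies $c^Tx \geq -|c_i|$, and $x_0$ meets that bound with equality.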