I am new to optimization, and one of the examples the professor gave in class was that we cannot formulate $\max \lVert x \rVert_1$ subject to $Ax = b$ as a linear programming problem. The reason the professor gave was that it is nonconvex. I was wondering why not being convex implies it cannot be written as a linear programming problem (since we can write $\min \lVert x \rVert_1$ as an LP), and how to prove that it is not convex.
[Math] Why maximizing the $L_1$ norm of a vector cannot be formulated as a linear programming problem
Tags: convex-optimization, linear-programming, nonlinear-optimization, optimization
Related Solutions
As far as I know the answer is negative: the point-wise minimum of affine functions is concave rather than convex, and thus you cannot cast it as an LP.
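This can be checked numerically. The following is a small pure-Python sketch (my own illustration, not part of the original answer) showing that the point-wise minimum of two affine functions violates the convexity inequality $f\left(\frac{a+b}{2}\right) \leq \frac{f(a)+f(b)}{2}$:

```python
# Numeric check that the point-wise minimum of two affine functions
# is not convex: convexity would require f((a+b)/2) <= (f(a)+f(b))/2.
def f(x):
    return min(x, 2.0 - x)   # point-wise min of the affine functions x and 2 - x

a, b = 0.0, 2.0
mid = 0.5 * (a + b)
# f(mid) = 1.0 while (f(a) + f(b)) / 2 = 0.0, so the convexity inequality fails.
assert f(mid) > 0.5 * (f(a) + f(b))
```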
Conversion of Basis Pursuit to Linear Programming
The Basis Pursuit problem is given by:
$$ \begin{align*} \arg \min_{x} \: & \: {\left\| x \right\|}_{1} \\ \text{subject to} \: & \: A x = b \end{align*} $$
Method A
The term $ {\left\| x \right\|}_{1} $ can be written in element-wise form:
$$ {\left\| x \right\|}_{1} = \sum_{i = 1}^{n} \left| {x}_{i} \right| $$
Then setting $ \left| {x}_{i} \right| \leq {t}_{i} $ one could write:
$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: \left| {x}_{i} \right| \leq {t}_{i} \; \forall i \end{align*} $$
Since $ \left| {x}_{i} \right| \leq {t}_{i} \iff {x}_{i} \leq {t}_{i}, \, {x}_{i} \geq -{t}_{i} $ then:
$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: {x}_{i} \leq {t}_{i} \; \forall i \\ & \: {x}_{i} \geq -{t}_{i} \; \forall i \end{align*} $$
This can be further refined:
$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: I x - I t \preceq \boldsymbol{0} \\ & \: -I x - I t \preceq \boldsymbol{0} \end{align*} $$
Which is a Linear Programming problem.
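To see the block structure concretely, here is a short Python sketch (my own illustration, assuming NumPy is available) that assembles the LP data for the refined form, with decision vector $z = [x; t]$, and checks that $z = [x; \lvert x \rvert]$ is feasible with objective value $\lVert x \rVert_1$:

```python
import numpy as np

def basis_pursuit_lp_data(A, b):
    """Build LP data for min 1^T t s.t. A x = b, x - t <= 0, -x - t <= 0."""
    m, n = A.shape
    I = np.eye(n)
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: 1^T t
    A_eq = np.hstack([A, np.zeros((m, n))])         # equality: A x = b
    A_ub = np.block([[I, -I], [-I, -I]])            # |x_i| <= t_i, split in two
    b_ub = np.zeros(2 * n)
    return c, A_ub, b_ub, A_eq, b

# Sanity check: for any x, z = [x; |x|] is feasible with objective ||x||_1.
A = np.array([[1.0, 2.0, -1.0]])
x = np.array([3.0, -1.0, 0.5])
b = A @ x
c, A_ub, b_ub, A_eq, b_eq = basis_pursuit_lp_data(A, b)
z = np.concatenate([x, np.abs(x)])
assert np.all(A_ub @ z <= b_ub + 1e-12)
assert np.allclose(A_eq @ z, b_eq)
assert np.isclose(c @ z, np.abs(x).sum())
```

These arrays can be passed directly to any standard LP solver that accepts `(c, A_ub, b_ub, A_eq, b_eq)` data in this form.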
Method B
Define:
$$ x = u - v, \; {u}_{i} = \max \left\{ {x}_{i}, 0 \right\}, \; {v}_{i} = \max \left\{ -{x}_{i}, 0 \right\} $$
Then the problem becomes:
$$ \begin{align*} \arg \min_{u, v} \: & \: \sum_{i = 1}^{n} {u}_{i} + {v}_{i} \\ \text{subject to} \: & \: A \left( u - v \right) = b \\ & \: u \succeq \boldsymbol{0} \\ & \: v \succeq \boldsymbol{0} \end{align*} $$
Which is also a Linear Programming problem.
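The split itself is easy to sanity-check. A minimal pure-Python sketch (my own illustration, not from the original answer) verifying that the definitions recover $x = u - v$ and $\sum_i (u_i + v_i) = \lVert x \rVert_1$:

```python
def split_pos_neg(x):
    """Split x into its positive and negative parts: u_i = max(x_i, 0), v_i = max(-x_i, 0)."""
    u = [max(xi, 0.0) for xi in x]
    v = [max(-xi, 0.0) for xi in x]
    return u, v

x = [3.0, -1.5, 0.0, 2.0]
u, v = split_pos_neg(x)
# x = u - v, elementwise
assert all(abs((ui - vi) - xi) < 1e-12 for ui, vi, xi in zip(u, v, x))
# sum(u + v) = ||x||_1
assert abs(sum(u) + sum(v) - sum(abs(xi) for xi in x)) < 1e-12
```

Note that an LP solver never needs to be told that $\min(u_i, v_i) = 0$: if both were positive, shrinking both by the same amount would keep $A(u - v) = b$ while lowering the objective, so this holds automatically at the optimum.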
MATLAB Implementation
A MATLAB implementation is straightforward using the linprog() function.
The full code, including validation using CVX, can be found in my StackExchange Mathematics Q1639716 GitHub Repository.
Code Snippet - Method A
function [ vX ] = SolveBasisPursuitLp001( mA, vB )
% Solves min ||x||_1 s.t. mA * x = vB via the epigraph form (Method A).
% The decision vector is [vX; vT] with |x_i| <= t_i.

numRows = size(mA, 1);
numCols = size(mA, 2);

% Equality constraints: A x = b (t is unconstrained here).
mAeq = [mA, zeros(numRows, numCols)];
vBeq = vB;

% Objective: minimize 1^T t.
vF = [zeros([numCols, 1]); ones([numCols, 1])];

% Inequality constraints: x - t <= 0 and -x - t <= 0, i.e. |x_i| <= t_i.
mAineq = [eye(numCols), -eye(numCols); -eye(numCols), -eye(numCols)];
vBineq = zeros(2 * numCols, 1);

sSolverOptions = optimoptions('linprog', 'Display', 'off');
vX = linprog(vF, mAineq, vBineq, mAeq, vBeq, [], [], sSolverOptions);
vX = vX(1:numCols); % Keep only the x part of [x; t].

end
Code Snippet - Method B
function [ vX ] = SolveBasisPursuitLp002( mA, vB )
% Solves min ||x||_1 s.t. mA * x = vB via the positive/negative split (Method B).
% vU = max(vX, 0);
% vV = max(-vX, 0);
% vX = vU - vV;
% vUV = [vU; vV];

numCols = size(mA, 2);

% Objective: minimize 1^T (u + v).
vF = ones([2 * numCols, 1]);

% Equality constraints: A (u - v) = b.
mAeq = [mA, -mA];
vBeq = vB;

% Bounds: u >= 0, v >= 0.
vLowerBound = zeros([2 * numCols, 1]);
vUpperBound = inf([2 * numCols, 1]);

sSolverOptions = optimoptions('linprog', 'Display', 'off');
vUV = linprog(vF, [], [], mAeq, vBeq, vLowerBound, vUpperBound, sSolverOptions);
vX = vUV(1:numCols) - vUV(numCols + 1:end);

end
I used the code above in Reconstruction of a Signal from Sub Sampled Spectrum by Compressed Sensing.
Best Answer
An $L_{1}$ maximization problem can have multiple isolated local maxima, but that can never happen in an LP. Thus, in general, you cannot construct an equivalent LP (one having the same set of local maximum points). It is possible in some trivial cases, but not in general.
For example, consider
$$ \begin{align*} \max_{x} \: & \: \left| x \right| \\ \text{subject to} \: & \: -1 \leq x \leq 2 \end{align*} $$
Here, the local maximum points are isolated at $x=-1$ and $x=2$. Since the optimal solutions to an LP form a convex set, you cannot construct an LP that has only these two optimal solutions.
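The convexity argument can be illustrated with a one-line check (my own Python sketch, not part of the original answer): if both points were optimal for a convex problem, their midpoint would have to be optimal too, but it attains a strictly smaller value of $\lvert x \rvert$.

```python
def f(x):
    """The objective |x| on the feasible interval [-1, 2]."""
    return abs(x)

x1, x2 = -1.0, 2.0          # the two isolated local maxima
mid = 0.5 * (x1 + x2)       # midpoint 0.5, also feasible
# f(mid) = 0.5 is strictly below both f(x1) = 1 and f(x2) = 2,
# so {x1, x2} cannot be the (convex) optimal set of any LP.
assert f(mid) < min(f(x1), f(x2))
```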
It's relatively easy to show that $L_{1}$ maximization is NP-Hard. Start with a 0-1 ILP feasibility problem:
$Ax=b$
$x \in \left\{ 0,1 \right\}^{n}$
You can relax the integrality constraint to $0 \leq x \leq 1$ and add the objective $\sum_{i=1}^{n} \left| x_{i} - 1/2 \right|$ to be maximized, obtaining
$\max \sum_{i=1}^{n} | x_{i} - 1/2 |$
subject to
$Ax=b$
$0 \leq x \leq 1$.
Since each term satisfies $\left| x_{i} - 1/2 \right| \leq 1/2$ on $[0, 1]$, the objective is at most $n/2$, with equality exactly when every $x_{i} \in \left\{ 0, 1 \right\}$. Thus, if the globally optimal value of this maximization problem is $n/2$, then the original 0-1 ILP feasibility problem has a $0$-$1$ integer solution. If the globally optimal value is less than $n/2$, then the original 0-1 ILP has no feasible solution.
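A brute-force illustration of the reduction on a hypothetical tiny system (my own example, not from the original answer): for $x_1 + x_2 = 1$ with $0 \leq x \leq 1$ and $n = 2$, the maximum of $\sum_i \lvert x_i - 1/2 \rvert$ over the feasible segment is $n/2 = 1$, attained exactly at the $0$-$1$ solutions $(1, 0)$ and $(0, 1)$.

```python
n = 2

def objective(x):
    """The reduction's objective: sum_i |x_i - 1/2|."""
    return sum(abs(xi - 0.5) for xi in x)

# Feasible points of x1 + x2 = 1 inside [0, 1]^2, parameterized by x1 = s.
grid = [k / 1000 for k in range(1001)]
values = [objective((s, 1.0 - s)) for s in grid]
best = max(values)

# The maximum equals n/2 = 1, certifying that a 0-1 solution exists...
assert abs(best - n / 2) < 1e-9
# ...and it is attained only at the integer endpoints s = 0 and s = 1.
argmax = [s for s, val in zip(grid, values) if abs(val - best) < 1e-9]
assert argmax == [0.0, 1.0]
```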