[Math] How Can $ {L}_{1} $ Norm Minimization with Linear Equality Constraints (Basis Pursuit / Sparse Representation) Be Formulated as Linear Programming

convex optimization, linear programming, sparsity

Problem Statement

Show how the $L_1$-sparse reconstruction problem:
$$\min_{x}{\left\lVert x\right\rVert}_1 \quad \text{subject to} \; y=Ax$$
can be reduced to a linear programming problem of form similar to:
$$\min_{u}{b^Tu} \quad \text{subject to} \; Gu=h, Cu\le e.$$
We can assume that $y$ belongs to the range of $A$, typically because $A\in \mathbb{R}^{m\times n}$ is full-rank with $m\lt n$.

What I've Got

I have never worked with linear programming before, and though I think I understand the basics, I have no experience with this kind of reduction. So far, I have tried understanding the problem geometrically: any $x$ in the solution set to $y=Ax$ can be written as the sum of some arbitrary solution $x_0$ and a vector $v$ in the null space of $A$, so the solution set is a shifted copy of the null space. We are trying to expand the $L_1$-ball (or hyper-diamond? I don't know what to call it) until one of the corners hits that shifted subspace. My problem is, I don't know how to express that formally.
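
The closest I can get to a formal statement of that picture is something like
$$ {\left\lVert {x}^{\star} \right\rVert}_{1} = \min \left\{ r \geq 0 \; : \; r B_{1} \cap \left( {x}_{0} + \mathcal{N} \left( A \right) \right) \neq \emptyset \right\}, \qquad B_{1} = \left\{ z : {\left\lVert z \right\rVert}_{1} \leq 1 \right\}, $$
which describes the geometry but is not obviously a linear program.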

The best I can think of is to use a method similar to Converting Sum of Infinity Norm and $ {L}_{1} $ Norm to Linear Programming and let $t_i=\left\lvert x_i\right\rvert, i=1\dots n$ and rewrite the objective as:
$$\min_{t}{1^Tt} \quad \text{subject to} \; x\le t, -x\le t, y=Ax$$
But then $x$ is still floating around in the problem, which doesn't match the desired form (and isn't implementable with MATLAB's linprog function, which I will have to do later). And even if we find such a $t$, recovering the underlying $x$ doesn't seem straightforward to me either.

Am I even moving in the right direction? Any help is appreciated.

Best Answer

Conversion of Basis Pursuit to Linear Programming

The Basis Pursuit problem is given by:

$$ \begin{align*} \arg \min_{x} \: & \: {\left\| x \right\|}_{1} \\ \text{subject to} \: & \: A x = b \end{align*} $$

Method A

The term $ {\left\| x \right\|}_{1} $ can be written in element-wise form:

$$ {\left\| x \right\|}_{1} = \sum_{i = 1}^{n} \left| {x}_{i} \right| $$

Then, introducing auxiliary variables $ {t}_{i} $ with $ \left| {x}_{i} \right| \leq {t}_{i} $, one can write:

$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: \left| {x}_{i} \right| \leq {t}_{i} \; \forall i \end{align*} $$

Since $ \left| {x}_{i} \right| \leq {t}_{i} \iff {x}_{i} \leq {t}_{i}, \, {x}_{i} \geq -{t}_{i} $ then:

$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: {x}_{i} \leq {t}_{i} \; \forall i \\ & \: {x}_{i} \geq -{t}_{i} \; \forall i \end{align*} $$

This can be further refined into matrix form:

$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: I x - I t \preceq \boldsymbol{0} \\ & \: -I x - I t \preceq \boldsymbol{0} \end{align*} $$

This is a Linear Programming problem in the stacked variable $ \begin{bmatrix} x \\ t \end{bmatrix} \in \mathbb{R}^{2 n} $.
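
In the question's notation ($ \min_{u} b^{T} u $ subject to $ G u = h $, $ C u \leq e $, with $ y $ the measurement vector, i.e. the $ b $ of the constraint $ A x = b $ above), one possible identification is:

$$ u = \begin{bmatrix} x \\ t \end{bmatrix}, \quad b = \begin{bmatrix} \boldsymbol{0}_{n} \\ \boldsymbol{1}_{n} \end{bmatrix}, \quad G = \begin{bmatrix} A & 0 \end{bmatrix}, \quad h = y, \quad C = \begin{bmatrix} I & -I \\ -I & -I \end{bmatrix}, \quad e = \boldsymbol{0}_{2 n}, $$

and after solving, the sparse solution is simply the first $ n $ entries of the optimal $ u $, so recovering $ x $ is immediate. (Other orderings of the stacked variable work equally well.)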

Method B

Define:

$$ x = u - v, \; {u}_{i} = \max \left\{ {x}_{i}, 0 \right\}, \; {v}_{i} = \max \left\{ -{x}_{i}, 0 \right\} $$

Since $ \left| {x}_{i} \right| \leq {u}_{i} + {v}_{i} $, with equality whenever $ {u}_{i} {v}_{i} = 0 $ (which always holds at the optimum, because reducing both $ {u}_{i} $ and $ {v}_{i} $ by the same amount keeps $ u - v $ unchanged while decreasing the objective), the problem becomes:

$$ \begin{align*} \arg \min_{u, v} \: & \: \sum_{i = 1}^{n} \left( {u}_{i} + {v}_{i} \right) \\ \text{subject to} \: & \: A \left( u - v \right) = b \\ & \: u \succeq \boldsymbol{0} \\ & \: v \succeq \boldsymbol{0} \end{align*} $$

This is also a Linear Programming problem, now with only nonnegativity constraints as inequalities.
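
Written with the stacked nonnegative variable, the data handed to an LP solver are (a sketch of the same problem in matrix form):

$$ \arg \min_{u, v} \: \boldsymbol{1}_{2 n}^{T} \begin{bmatrix} u \\ v \end{bmatrix} \quad \text{subject to} \; \begin{bmatrix} A & -A \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = b, \; \begin{bmatrix} u \\ v \end{bmatrix} \succeq \boldsymbol{0}, $$

and the original variable is recovered from the minimizer via $ x = u - v $.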

MATLAB Implementation

The MATLAB implementation is straightforward using the linprog() function.
The full code, including validation using CVX, can be found in my StackExchange Mathematics Q1639716 GitHub Repository.

Code Snippet - Method A

function [ vX ] = SolveBasisPursuitLp001( mA, vB )

numRows = size(mA, 1);
numCols = size(mA, 2);

% Stacked LP variable: vXT = [vX; vT] with |vX| <= vT element wise.

% Equality constraints: A * vX = vB (the vT block is multiplied by zeros).
mAeq = [mA, zeros(numRows, numCols)];
vBeq = vB;

% Objective: minimize sum(vT).
vF = [zeros([numCols, 1]); ones([numCols, 1])];

% Inequality constraints: vX - vT <= 0 and -vX - vT <= 0.
mAineq = [eye(numCols), -eye(numCols); -eye(numCols), -eye(numCols)];
vBineq = zeros(2 * numCols, 1);

sSolverOptions = optimoptions('linprog', 'Display', 'off');
vXT = linprog(vF, mAineq, vBineq, mAeq, vBeq, [], [], sSolverOptions);
vX  = vXT(1:numCols);

end
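
As a usage sketch, assuming arbitrary problem sizes and sparsity level (none of these numbers or variable names come from the original post), the function above can be exercised on a random underdetermined system:

% Usage sketch for Method A (hypothetical test data).
numRows     = 32;
numCols     = 128;
numNonZeros = 4;

mA    = randn(numRows, numCols);                % Random wide (m < n) measurement matrix
vXRef = zeros(numCols, 1);                      % Sparse ground truth vector
vXRef(randperm(numCols, numNonZeros)) = randn(numNonZeros, 1);
vB    = mA * vXRef;                             % Measurements

vX = SolveBasisPursuitLp001(mA, vB);

disp(norm(mA * vX - vB));                       % Feasibility residual (should be ~0)
disp(norm(vX - vXRef));                         % Often small when vXRef is sparse enough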

Code Snippet - Method B

function [ vX ] = SolveBasisPursuitLp002( mA, vB )

numCols = size(mA, 2);

% Split vX into its positive and negative parts:
% vU = max(vX, 0); vV = max(-vX, 0); vX = vU - vV; vUV = [vU; vV];

% Objective: minimize sum(vU) + sum(vV).
vF = ones([2 * numCols, 1]);

% Equality constraints: A * (vU - vV) = vB.
mAeq = [mA, -mA];
vBeq = vB;

% Bounds: vU >= 0, vV >= 0 (no general inequality constraints needed).
vLowerBound = zeros([2 * numCols, 1]);
vUpperBound = inf([2 * numCols, 1]);

sSolverOptions = optimoptions('linprog', 'Display', 'off');

vUV = linprog(vF, [], [], mAeq, vBeq, vLowerBound, vUpperBound, sSolverOptions);

% Recover vX from its split representation.
vX = vUV(1:numCols) - vUV(numCols + 1:end);

end
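
On the same data, the two formulations should agree. A quick consistency check, again with hypothetical test data and variable names (vXRef, vX1, vX2) chosen only for this sketch:

% Consistency check between Method A and Method B (hypothetical test data).
mA    = randn(20, 80);
vXRef = zeros(80, 1);
vXRef(randperm(80, 3)) = randn(3, 1);
vB    = mA * vXRef;

vX1 = SolveBasisPursuitLp001(mA, vB);
vX2 = SolveBasisPursuitLp002(mA, vB);

disp(abs(norm(vX1, 1) - norm(vX2, 1)));         % Optimal objective values should match up to solver tolerance
disp(norm(vX1 - vX2, inf));                     % Minimizers usually coincide as well when the solution is unique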

I used the code above in Reconstruction of a Signal from Sub Sampled Spectrum by Compressed Sensing.