Conversion of Basis Pursuit to Linear Programming
The Basis Pursuit problem is given by:
$$
\begin{align*}
\arg \min_{x} \: & \: {\left\| x \right\|}_{1} \\
\text{subject to} \: & \: A x = b
\end{align*}
$$
Method A
The term $ {\left\| x \right\|}_{1} $ can be written in element-wise form:
$$ {\left\| x \right\|}_{1} = \sum_{i = 1}^{n} \left| {x}_{i} \right| $$
Introducing auxiliary variables $ {t}_{i} $ with $ \left| {x}_{i} \right| \leq {t}_{i} $, one can write:
$$
\begin{align*}
\arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\
\text{subject to} \: & \: A x = b \\
& \: \left| {x}_{i} \right| \leq {t}_{i} \; \forall i
\end{align*}
$$
Since $ \left| {x}_{i} \right| \leq {t}_{i} \iff {x}_{i} \leq {t}_{i}, \, {x}_{i} \geq -{t}_{i} $, this is equivalent to:
$$
\begin{align*}
\arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\
\text{subject to} \: & \: A x = b \\
& \: {x}_{i} \leq {t}_{i} \; \forall i \\
& \: {x}_{i} \geq -{t}_{i} \; \forall i
\end{align*}
$$
This can be further written in matrix form:
$$
\begin{align*}
\arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\
\text{subject to} \: & \: A x = b \\
& \: I x - I t \preceq \boldsymbol{0} \\
& \: -I x - I t \preceq \boldsymbol{0}
\end{align*}
$$
Which is a Linear Programming problem.
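For readers working outside MATLAB, the same LP can be sketched in Python with `scipy.optimize.linprog`; the helper name `basis_pursuit_method_a` and the tiny test problem below are illustrative, not part of the original answer.

```python
import numpy as np
from scipy.optimize import linprog


def basis_pursuit_method_a(A, b):
    """Solve min ||x||_1 s.t. A x = b via the stacked-variable [x; t] LP."""
    m, n = A.shape
    # Objective: 0^T x + 1^T t.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    # Equality constraint: [A, 0] [x; t] = b.
    A_eq = np.hstack([A, np.zeros((m, n))])
    # Inequality constraints: x - t <= 0 and -x - t <= 0, i.e. |x_i| <= t_i.
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * n)
    # x is free, t >= 0 (scipy's default bounds are (0, None), so x must be freed).
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]
```

For example, with $A = [1, 2]$ and $b = 2$ the $\ell_1$-minimal solution is $x = (0, 1)$.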
Method B
Define:
$$ x = u - v, \; {u}_{i} = \max \left\{ {x}_{i}, 0 \right\}, \; {v}_{i} = \max \left\{ -{x}_{i}, 0 \right\} $$
Then the problem becomes:
$$
\begin{align*}
\arg \min_{u, v} \: & \: \sum_{i = 1}^{n} \left( {u}_{i} + {v}_{i} \right) \\
\text{subject to} \: & \: A \left( u - v \right) = b \\
& \: u \succeq \boldsymbol{0} \\
& \: v \succeq \boldsymbol{0}
\end{align*}
$$
Which is also a Linear Programming problem. Note that at the optimum $ {u}_{i} {v}_{i} = 0 $ for all $i$ (otherwise subtracting $ \min \left\{ {u}_{i}, {v}_{i} \right\} $ from both would keep $ u - v $ unchanged while lowering the objective), hence $ {u}_{i} + {v}_{i} = \left| {x}_{i} \right| $ and the objective indeed equals $ {\left\| x \right\|}_{1} $.
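Method B admits an even shorter Python sketch with `scipy.optimize.linprog`, since the non-negativity of $u$ and $v$ matches SciPy's default bounds; the helper name and test data below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog


def basis_pursuit_method_b(A, b):
    """Solve min ||x||_1 s.t. A x = b via the split x = u - v, u, v >= 0."""
    m, n = A.shape
    # Objective: 1^T u + 1^T v over the stacked variable [u; v].
    c = np.ones(2 * n)
    # Equality constraint: A u - A v = b.
    A_eq = np.hstack([A, -A])
    # scipy's default bounds (0, None) already encode u >= 0, v >= 0.
    res = linprog(c, A_eq=A_eq, b_eq=b)
    # Recover x = u - v.
    return res.x[:n] - res.x[n:]
```

On the same toy instance $A = [1, 2]$, $b = 2$, this returns the sparse solution $x = (0, 1)$, matching Method A.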
MATLAB Implementation
The MATLAB implementation is straightforward using the linprog() function.
The full code, including validation using CVX, can be found in my StackExchange Mathematics Q1639716 GitHub Repository.
Code Snippet - Method A
function [ vX ] = SolveBasisPursuitLp001( mA, vB )
% Solves min ||x||_1 s.t. mA * x = vB as an LP over the stacked vector [x; t].
numRows = size(mA, 1);
numCols = size(mA, 2);
% Objective: minimize 1^T * t (the x part has zero cost).
vF = [zeros([numCols, 1]); ones([numCols, 1])];
% Equality constraint: [mA, 0] * [x; t] = vB.
mAeq = [mA, zeros(numRows, numCols)];
vBeq = vB;
% Inequality constraints: x - t <= 0 and -x - t <= 0, i.e. |x_i| <= t_i.
mAineq = [eye(numCols), -eye(numCols); -eye(numCols), -eye(numCols)];
vBineq = zeros(2 * numCols, 1);
sSolverOptions = optimoptions('linprog', 'Display', 'off');
vX = linprog(vF, mAineq, vBineq, mAeq, vBeq, [], [], sSolverOptions);
vX = vX(1:numCols);
end
Code Snippet - Method B
function [ vX ] = SolveBasisPursuitLp002( mA, vB )
% Solves min ||x||_1 s.t. mA * x = vB via the split x = u - v, u, v >= 0.
numCols = size(mA, 2);
% Decision vector vUV = [vU; vV] with vU = max(vX, 0), vV = max(-vX, 0),
% so that vX = vU - vV.
% Objective: minimize 1^T * (u + v).
vF = ones([2 * numCols, 1]);
% Equality constraint: mA * (u - v) = vB.
mAeq = [mA, -mA];
vBeq = vB;
% Bounds: u >= 0, v >= 0.
vLowerBound = zeros([2 * numCols, 1]);
vUpperBound = inf([2 * numCols, 1]);
sSolverOptions = optimoptions('linprog', 'Display', 'off');
vUV = linprog(vF, [], [], mAeq, vBeq, vLowerBound, vUpperBound, sSolverOptions);
vX = vUV(1:numCols) - vUV(numCols + 1:end);
end
I used the code above in Reconstruction of a Signal from Sub Sampled Spectrum by Compressed Sensing.
To complement Erwin Kalvelagen's answer, we have the following optimization problem in $\mathrm x \in \mathbb R^n$
$$\begin{array}{ll} \text{minimize} & \|\mathrm A \mathrm x - \mathrm b\|_\infty\end{array}$$
where $\mathrm A \in \mathbb R^{m \times n}$ and $\mathrm b \in \mathbb R^m$ are given. Let $\mathrm a_i^\top \in \mathbb R^n$ denote the $i$-th row of $\rm A$.
Introducing decision variable $t \in \mathbb R$ and rewriting in epigraph form, we then obtain the following constrained optimization problem in $\mathrm x \in \mathbb R^n$ and $t \in \mathbb R$
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & \|\mathrm A \mathrm x - \mathrm b\|_\infty \leq t\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & \displaystyle\max_{1 \leq i \leq m} | \mathrm a_i^\top \mathrm x - b_i | \leq t\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & | \mathrm a_1^\top \mathrm x - b_1 | \leq t\\ & | \mathrm a_2^\top \mathrm x - b_2 | \leq t\\ & \qquad \vdots\\ & |\mathrm a_m^\top \mathrm x - b_m | \leq t\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & -t \leq \mathrm a_1^\top \mathrm x - b_1 \leq t\\ & -t \leq \mathrm a_2^\top \mathrm x - b_2 \leq t\\ & \qquad\quad \vdots\\ & -t \leq \mathrm a_m^\top \mathrm x - b_m \leq t\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & -t 1_m \leq \mathrm A \mathrm x - \mathrm b \leq t 1_m\end{array}$$
where the optimal $\rm x$ and the optimal $t$ are the minimizer and the minimum of the original problem, respectively.
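The final formulation maps directly onto an LP solver. The following is a minimal Python sketch over the stacked variable $[\mathrm x; t]$ using `scipy.optimize.linprog`; the helper name `linf_regression` and the two-point test problem are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def linf_regression(A, b):
    """Solve min ||A x - b||_inf via the epigraph LP in [x; t]."""
    m, n = A.shape
    # Objective: minimize t only (x carries zero cost).
    c = np.concatenate([np.zeros(n), [1.0]])
    # -t 1 <= A x - b <= t 1, written as A x - t 1 <= b and -A x - t 1 <= -b.
    ones = np.ones((m, 1))
    A_ub = np.vstack([np.hstack([A, -ones]), np.hstack([-A, -ones])])
    b_ub = np.concatenate([b, -b])
    # x is free, t >= 0.
    bounds = [(None, None)] * n + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[n]
```

For instance, fitting the two scalar observations $b = (0, 2)$ with $\mathrm a_1 = \mathrm a_2 = 1$ gives the midrange $x = 1$ with $t = 1$.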
Best Answer
Not an answer, but too long for a comment.
To see why the problems are equivalent:
Let $P_1$ be $\min_{x,(t_1,t_2,...)} \{ \sum_i t_i \mid |a_i^Tx -b_i| \le t_i \}$ and $P_2$ be $\min_{x,t} \{ t \mid \sum_i|a_i^Tx -b_i| \le t \}$.
Suppose $(x,(t_1,t_2,\ldots))$ is feasible for $P_1$. Then $(x,\sum_i t_i)$ is feasible for $P_2$, since $\sum_i |a_i^Tx -b_i| \le \sum_i t_i$, and it attains objective value $t = \sum_i t_i$. Hence the value of $P_2$ must be $\le$ the value of $P_1$.
Similarly, if $(x,t)$ is feasible for $P_2$, then $(x, (|a_1^Tx -b_1|, |a_2^Tx-b_2|,...))$ is feasible for $P_1$ and $\sum_i |a_i^Tx -b_i| \le t$. Hence the value of $P_1$ must be $\le $ the value of $P_2$.
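The sandwich argument can also be checked numerically: solving $P_1$ as an LP should reproduce the known $\ell_1$ (median) solution, whose objective is exactly the value $t$ that $P_2$ attains. A minimal sketch with `scipy.optimize.linprog`; the helper name `solve_p1` and the three-point test data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog


def solve_p1(A, b):
    """Solve P1: min sum_i t_i s.t. |a_i^T x - b_i| <= t_i, as an LP in [x; t]."""
    m, n = A.shape
    # Objective: 0^T x + 1^T t.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    # a_i^T x - t_i <= b_i and -a_i^T x - t_i <= -b_i for each row i.
    I = np.eye(m)
    A_ub = np.vstack([np.hstack([A, -I]), np.hstack([-A, -I])])
    b_ub = np.concatenate([b, -b])
    # x is free, t >= 0.
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.fun
```

With $A = (1, 1, 1)^\top$ and $b = (0, 1, 10)$, the $\ell_1$ minimizer is the median $x = 1$ with objective $1 + 0 + 9 = 10$, which is the common optimal value of $P_1$ and $P_2$.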