Non-negativity constraints in linear-programming formulation of $L_1$ norm minimization

Tags: absolute value, linear algebra, linear programming

My goal is to solve the following minimization problem:
$$
\min_{x}\|Ax - b\|_1 \tag{1},
$$

where, for simplicity, $A$ is an $n \times n$ matrix and $x,b\in\mathbb{R}^n$. One can solve this by expressing it as a linear program. Let $y \in \mathbb{R}^n$, and solve instead
$$
\min_{x,y} \left(0^\top x + \mathbf{1}^\top y\right)
$$

subject to
$$
y \geq Ax - b, \quad y \geq -(Ax-b),
$$

where the inequalities hold componentwise. This pair can be written as a single block linear constraint:
$$
\begin{pmatrix}A & -I\\-A & -I\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} \leq \begin{pmatrix}b\\-b\end{pmatrix}.
$$
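
For concreteness, here is a minimal sketch of this construction in Python using `scipy.optimize.linprog` (my choice of solver; any LP solver would do, and the helper name `l1_minimize` is made up for illustration). The detail most relevant to the question below is that `linprog` imposes the bounds $x \ge 0$ by default, so every variable has to be explicitly declared free:

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b):
    """Minimize ||Ax - b||_1 via the LP reformulation above."""
    n, m = A.shape  # residual dimension n; m == n in the question's setting
    # Objective: zero cost on the x-block, unit cost on the y-block (1^T y).
    c = np.concatenate([np.zeros(m), np.ones(n)])
    # Stack the two inequality systems into [ A  -I; -A  -I ][x; y] <= [b; -b].
    I = np.eye(n)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    # Crucial here: linprog's default bounds are (0, None), i.e. x >= 0,
    # so all variables must be explicitly made free.
    bounds = [(None, None)] * (m + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:m], res.x[m:]  # optimal x, and y = |Ax - b| at the optimum
```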

My question concerns the non-negativity constraints on $(x,y)$. It is clear that the conditions $y \geq Ax - b$ and $y \geq -(Ax-b)$ imply $y \geq 0$. My question is: how does one derive that $x \geq 0$? I've seen the argument that, since changes in $x$ do not modify the value of the cost function, we can impose any constraint on $x$, and therefore we can simply assert that $x \geq 0$.

I don't fully understand this argument. I could alternatively write the cost function as
$$
\min_x \min_y \mathbf{1}^\top y.
$$

Since the constraints on $y$ depend on $x$, it seems to me that $\min_y \mathbf{1}^\top y$ is itself a function of $x$: fixing a value $x_0$ of $x$ yields a different lower bound $y \geq |Ax_0 - b|$ that $y$ must satisfy. How, then, can I claim that $x \geq 0$? It also seems that if $x \geq 0$ holds for the linear programming problem, then the minimizer of Eq. (1) can likewise be chosen to be $\geq 0$, and it's unclear to me what justifies that.

Best Answer

It is not necessarily true that $x \ge 0$. Consider the one-dimensional example of minimizing $|x+1|$: that is, $n=1$, $A=(1)$, and $b=-1$. The unique optimal solution is $x=-1$, and the corresponding LP has the unique optimal solution $(x,y)=(-1,0)$. If a solver requires non-negative variables, the standard remedy is not to assert $x \ge 0$, but to split $x = x^+ - x^-$ with $x^+, x^- \ge 0$.
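
Assuming the hypothetical `l1_minimize` sketch from the question above, this counterexample is easy to check numerically; with free variables the solver recovers the negative minimizer, while `linprog`'s default non-negativity bound would instead force the strictly worse point $x = 0$:

```python
import numpy as np

# One-dimensional counterexample: minimize |x + 1|, i.e. A = (1), b = -1.
A = np.array([[1.0]])
b = np.array([-1.0])

x, y = l1_minimize(A, b)
print(x, y)  # expected: x ~ [-1.], y ~ [0.]

# With linprog's default bound x >= 0, the best feasible point would be
# x = 0, with objective value |0 + 1| = 1 > 0.
```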
