[Math] Existence of nonnegative solutions to an underdetermined system of linear equations

linear-algebra, linear-programming

Similar questions have been asked elsewhere, but I think this is sufficiently different to warrant a new post. I have a particular matrix $A$ and would like to know when the system $Ax = 0$ has at least one non-negative solution (other than $\vec{0}$). The problem is underdetermined: in most cases I expect the number of variables to be of the order $m^2$, where $m$ is the number of equations. Furthermore, each column of the matrix sums to $0$ and every equation has a mix of positive and negative coefficients. Is this a sufficient condition for the existence of a non-negative solution?

I have seen the algorithm of http://www.jstor.org/pss/1968384, which can be used to test whether a particular system of equations has a non-negative solution, but have not been able to use it to derive a proof for a general family of matrices.

Thanks.

Best Answer

This scenario is handled exactly by Gordan's theorem, which states that for $A \in \mathbb{R}^{m\times n}$, $$ \text{either} \quad \exists x \in \mathbb{R}_+^n\setminus\{0\} \centerdot Ax = 0, \quad\text{or}\quad \exists y\in\mathbb{R}^m\centerdot A^\top y > 0, $$

where $\mathbb{R}_+$ denotes the nonnegative reals and the inequality $A^\top y > 0$ holds entrywise. (Like Farkas's lemma, this is a "theorem of the alternative"; in fact, it can be proved from Farkas's lemma.)
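Gordan's theorem also yields a direct computational test. Here is a minimal sketch (assuming SciPy is available; the function name `has_nonneg_solution` is my own). Since the nonnegative solutions of $Ax = 0$ form a cone, any nontrivial one can be rescaled into the box $0 \le x \le 1$, so it suffices to maximize $\mathbf{1}^\top x$ over that box subject to $Ax = 0$ and check whether the optimum is positive:

```python
import numpy as np
from scipy.optimize import linprog

def has_nonneg_solution(A, tol=1e-9):
    """Return True iff Ax = 0 has a solution with x >= 0, x != 0.

    The nonnegative solutions form a cone, so any nontrivial one can be
    rescaled to lie in the box 0 <= x <= 1; we maximize 1^T x over that
    box subject to Ax = 0 and test whether the optimum is positive.
    """
    m, n = A.shape
    res = linprog(c=-np.ones(n),              # linprog minimizes, so negate
                  A_eq=A, b_eq=np.zeros(m),
                  bounds=[(0.0, 1.0)] * n,
                  method="highs")
    return res.status == 0 and -res.fun > tol

# Columns sum to zero AND a nonnegative solution exists (x = (1, 1)):
A_pos = np.array([[1.0, -1.0], [-1.0, 1.0]])
# Columns sum to zero, yet the only nonnegative solution is x = 0:
A_neg = np.array([[1.0], [-1.0]])
```

The two small matrices at the end illustrate why the column-sum condition alone cannot be decisive: both satisfy $\mathbf{1}^\top A = 0$, but only the first admits a nontrivial nonnegative solution.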

A nice way to prove this, as in Theorem 2.2.6 of Borwein & Lewis's text "Convex Analysis and Nonlinear Optimization", is to consider the related optimization problem $$ \inf_y \quad\underbrace{\ln\left(\sum_{i=1}^n \exp(y^\top A \textbf e_i)\right)}_{f(y)}, $$ where $y^\top A \textbf e_i$ is the inner product of $y$ with the $i$th column of $A$. As stated in that theorem, $f$ is unbounded below iff there exists $y$ such that $A^\top y > 0$. This also gives an unconstrained optimization problem you can plug into your favorite solver to determine which of the two scenarios you are in. Alternatively, you can explicitly solve for either the primal variables $x$ or the dual variables $y$ by considering a similar max-entropy problem (i.e. $\inf_y\sum_i \exp(y^\top A\textbf{e}_i)$, whose infimum is $0$ iff the desired $y$ exists) or its dual (you can find this in the above book, as well as in papers by the same authors).
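To make the primal recovery concrete, here is a sketch (assuming SciPy; `recover_x` is my own name). The gradient of $f$ is $\nabla f(y) = A\,p(y)$ with $p(y) = \operatorname{softmax}(A^\top y)$, so when the infimum is attained, the softmax weights at a minimizer are themselves a nonnegative, unit-sum solution of $Ax = 0$; when $f$ is unbounded below (the other alternative), the iterates diverge and $Ap$ stays away from zero:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def recover_x(A):
    """Minimize f(y) = ln(sum_i exp(y^T a_i)) over the columns a_i of A.

    The gradient is A @ softmax(A^T y), so at an attained minimizer the
    softmax weights p satisfy A p ~= 0, p >= 0, sum(p) = 1: a nontrivial
    nonnegative solution of Ax = 0.  If f is unbounded below (Gordan's
    other alternative), the optimizer diverges and A @ p does not vanish.
    """
    m, _ = A.shape
    f = lambda y: logsumexp(A.T @ y)
    grad = lambda y: A @ softmax(A.T @ y)
    res = minimize(f, np.zeros(m), jac=grad, method="BFGS")
    return softmax(A.T @ res.x)

# Columns (2, -2) and (-1, 1): x = (1, 2) solves Ax = 0, and recover_x
# returns its normalized version p = (1/3, 2/3).
A = np.array([[2.0, -1.0], [-2.0, 1.0]])
p = recover_x(A)
```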

Anyway, in light of Gordan's theorem, your condition on the columns (which can be written $\textbf{1}^\top A = 0$) is neither necessary nor sufficient for the existence of a nonnegative solution. In one of your comments you mentioned wanting to generate these matrices. To pick positive examples, fix a satisfying $x$ and construct rows $b_i'$ by drawing arbitrary rows $b_i$ and projecting them orthogonal to $x$, i.e. $b_i' := b_i - (x^\top b_i)x / (x^\top x)$. To pick negative examples, by Gordan's theorem, choose some nonzero $y$ and build $A$ column by column, accepting a candidate column $a_i$ only if it satisfies $a_i^\top y > 0$.
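The two recipes can be sketched as follows (NumPy only; the function names and the choice of random Gaussian entries are my own):

```python
import numpy as np

def positive_example(m, n, rng):
    """Build A with a known solution: fix x > 0 and project each random
    row b_i orthogonal to it, b_i' = b_i - (x^T b_i) x / (x^T x), so every
    row of A is orthogonal to x and hence A x = 0."""
    x = rng.random(n) + 0.1                    # strictly positive target
    B = rng.standard_normal((m, n))
    A = B - np.outer(B @ x, x) / (x @ x)
    return A, x

def negative_example(m, n, rng):
    """Build A with no nontrivial nonnegative solution: fix a nonzero y
    and accept random candidate columns a only when a^T y > 0, so that
    A^T y > 0 entrywise and Gordan's theorem rules such a solution out."""
    y = rng.standard_normal(m)
    cols = []
    while len(cols) < n:
        a = rng.standard_normal(m)
        if a @ y > 0:
            cols.append(a)
    return np.column_stack(cols), y

rng = np.random.default_rng(0)
A1, x = positive_example(4, 16, rng)   # A1 @ x == 0 up to rounding
A2, y = negative_example(4, 16, rng)   # every column a of A2 has a^T y > 0
```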