[Math] An elementary proof of Farkas' Lemma that does not use convex analysis or the hyperplane separation theorem

convex optimization, linear algebra, linear programming, matrices

Is there an elementary proof of Farkas' lemma that does not use convex analysis or the hyperplane separation theorem?

What about the special case below:

If the matrix $A$ is invertible, then there is obviously a unique vector $x$ such that $Ax=b$. Is it hard to show that either $x$ is non-negative or there is a vector $y$ such that $y^tA \geq 0$ and $y^tb<0$?
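(For concreteness, with the made-up data $A=\begin{pmatrix}2 & 1\\ 1 & 3\end{pmatrix}$ and $b=(1,-4)^t$, one checks that $x=A^{-1}b=\frac{1}{5}(7,-9)^t$ has a negative entry, and $y^t=\frac{1}{5}(-1,2)$ satisfies $y^tA=(0,1)\geq 0$ and $y^tb=-\frac{9}{5}<0$, so the second alternative holds.)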

Best Answer

For this special case, there is a very simple solution. Let $e_1,\ldots,e_n$ be the canonical basis of $\mathbb{R}^n$.

If $A$ is invertible, then the unique solution of $Ax=b$ is $A^{-1}b$.

If the vector $A^{-1}b$ has a negative entry, say the $i$-th, then there exists $e_i$ such that $e_i^tA^{-1}b<0$.

Now, since $A$ is invertible, there exists $y$ such that $y^tA=e_i^t\geq 0$; namely, take $y^t=e_i^tA^{-1}$.

Finally, $y^tb=y^tAA^{-1}b=e_i^tA^{-1}b<0$.
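Here is a minimal numerical sketch of this construction, assuming NumPy and a hypothetical example pair $(A,b)$ for which $A^{-1}b$ has a negative entry:

```python
import numpy as np

# Hypothetical data: an invertible A and a b for which A^{-1}b
# has a negative entry.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, -4.0])

x = np.linalg.solve(A, b)        # the unique solution of Ax = b
if np.all(x >= 0):
    print("first alternative: x =", x, "is non-negative")
else:
    i = int(np.argmin(x))        # index of a negative entry of A^{-1}b
    y = np.linalg.inv(A)[i]      # y^t = e_i^t A^{-1}, i.e. row i of A^{-1}
    print("y^t A =", y @ A)      # equals e_i^t, hence >= 0
    print("y^t b =", y @ b)      # equals (A^{-1}b)_i < 0
```

Picking the most negative entry via `argmin` is an arbitrary choice; any index $i$ with $(A^{-1}b)_i<0$ yields a valid certificate $y$.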
