Farkas’ lemma proof explanation

Tags: convex-optimization, linear-algebra, linear-programming, matrix-rank

I have been studying the proof of the following variant of Farkas' Lemma:

A system of linear equations $A \mathbf{x} = \mathbf{b}$, with $A \in \mathbb{R}^{n\times d}$ and $\mathbf{x} \in \mathbb{R}^d$, has a solution iff for all $\lambda \in \mathbb{R}^n$, $\lambda^T A = \mathbf{0}^T$ implies $\lambda^T \mathbf{b} = 0$.

For the direction $\Rightarrow$ the proof is easy: suppose that $A\mathbf{x} = \mathbf{b}$ has a solution $\bar{\mathbf{x}}$. Then $\lambda^T A = \mathbf{0}^T$ implies $\lambda^T \mathbf{b} = \lambda^T A \bar{\mathbf{x}} = (\lambda^T A)\bar{\mathbf{x}} = \mathbf{0}^T \bar{\mathbf{x}} = 0$.

For the other direction the proof that the notes give proceeds as follows:

The implication $\lambda^TA= \mathbf{0}^T \Rightarrow \lambda^T\mathbf{b}=0$ means that both matrices $A \in \mathbb{R}^{n\times d}$ and $(A|\mathbf{b}) \in \mathbb{R}^{n\times (d+1)}$ have the same linear dependencies among their rows, and therefore the same row rank, which equals the column rank. Since the column space of $A$ is contained in the column space of $(A|\mathbf{b})$ and the two spaces have the same dimension, they coincide; hence $\mathbf{b}$ is a linear combination of the columns of $A$, which implies that $A\mathbf{x} = \mathbf{b}$ has a solution.
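(As a numerical sanity check of this rank criterion, here is a minimal NumPy sketch; the matrices below are a toy example of my own, not from the notes.)

```python
import numpy as np

# Toy example: the third row of A is the sum of the first two.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])      # respects the row dependency: 1 + 2 - 3 = 0
b_bad = np.array([1.0, 2.0, 4.0])  # violates it: 1 + 2 - 4 != 0

def solvable(A, b):
    """Rank criterion from the proof: Ax = b has a solution
    iff rank(A) equals the rank of the augmented matrix (A|b)."""
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

print(solvable(A, b))      # True:  rank(A) = rank(A|b) = 2
print(solvable(A, b_bad))  # False: rank(A) = 2, but rank(A|b_bad) = 3
```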

What I am missing in the second part of the proof is why the hypothesis implies that both matrices have the same linear dependencies among their rows. Can anyone give an intuitive explanation?

Best Answer

The equation $\lambda^TA= \mathbf{0}^T$ is a statement that a certain linear combination of the rows of $A$ is $\mathbf{0}^T$. Let the rows of $A$ be $A_1,A_2,\dots,A_n$, and suppose that $\lambda_1A_1+\dots+\lambda_nA_n=\mathbf{0}^T$. Then with $\lambda^T=(\lambda_1,\dots,\lambda_n)$ we have $\lambda^TA= \mathbf{0}^T$.

Under the hypothesis that $\lambda^TA= \mathbf{0}^T \Rightarrow \lambda^T\mathbf{b}=0$, the matrix $(A|\mathbf{b})$ has the same linear dependencies as $A$. Indeed, $\lambda$ encodes a dependency among the rows of $(A|\mathbf{b})$ exactly when $\lambda^T(A|\mathbf{b}) = \mathbf{0}^T$, i.e. when both $\lambda^TA = \mathbf{0}^T$ and $\lambda^T\mathbf{b} = 0$. Dropping the last coordinate shows that every dependency among the rows of $(A|\mathbf{b})$ is a dependency among the rows of $A$, and the hypothesis gives the converse: whenever $\lambda^TA = \mathbf{0}^T$, also $\lambda^T\mathbf{b} = 0$, hence $\lambda^T(A|\mathbf{b}) = \mathbf{0}^T$. So the two sets of dependency vectors coincide.
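For a concrete illustration (a toy example of my own, not from the question's notes), take
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}.$$
The rows of $A$ satisfy $A_1 + A_2 - A_3 = \mathbf{0}^T$, i.e. $\lambda^TA = \mathbf{0}^T$ for $\lambda = (1,1,-1)^T$. The hypothesis then forces $\lambda^T\mathbf{b} = 1 + 2 - 3 = 0$, so the very same $\lambda$ is a dependency among the rows of $(A|\mathbf{b})$: $(1,0,1) + (0,1,2) - (1,1,3) = \mathbf{0}^T$. If instead $\mathbf{b} = (1,2,4)^T$, the hypothesis fails ($\lambda^T\mathbf{b} = -1 \neq 0$), $(A|\mathbf{b})$ gains a rank that $A$ lacks, and indeed $A\mathbf{x} = \mathbf{b}$ has no solution.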