[Math] Pseudoinverse of columns of a matrix

linear algebra

First, some background:

I'm working on an implementation in C# of Lemke's algorithm (for solving linear complementarity problems) based on this Matlab implementation: http://ftp.cs.wisc.edu/math-prog/matlab/lemke.m

The implementation calculates $d = B \backslash b_e$ in a tight inner loop. (Here $\backslash$ is the Matlab backslash (mldivide) operator, which computes a least-squares "solution", as the Moore-Penrose pseudoinverse would.) So as the implementation stands I'd have to decompose a matrix every iteration to compute the answer, which would be far too slow for practical use.
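For concreteness, here is roughly what that inner-loop solve looks like in NumPy terms (a minimal sketch, not the lemke.m code; the matrix and vector are random stand-ins):

```python
import numpy as np

# Minimal sketch of the inner-loop solve d = B \ b_e (not the lemke.m code).
rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))   # stand-in for the current basis matrix
b_e = rng.standard_normal(n)      # stand-in for the entering column

# For a square nonsingular B, Matlab's backslash amounts to an LU-based solve;
# np.linalg.solve does the same. Refactoring B every iteration costs O(n^3).
d = np.linalg.solve(B, b_e)
print(d)
```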

However, looking at it more closely, the $B$ matrix is actually a permutation of a subset of the columns of a larger matrix that doesn't change from iteration to iteration. In block notation that larger matrix has the form $[M, -I, c]$, where $M$ is a supplied square matrix, $I$ is the identity matrix of the same size as $M$, and $c$ is a calculable column vector. I'll call this larger matrix $B_s$, so that $B_s P = B$, where $P$ is like a permutation matrix except that it also drops some columns (so that the result, $B$, is square).

As for $b_e$, it's actually just a column from $B_s$.
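A hypothetical illustration of this setup (index choices and sizes are arbitrary, just to show the column-selection structure):

```python
import numpy as np

# Hypothetical illustration of the setup described above: B_s = [M, -I, c],
# with B a permuted column subset of B_s and b_e one of B_s's columns.
rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
c = np.ones((n, 1))                      # stand-in for the computed column c
B_s = np.hstack([M, -np.eye(n), c])      # n x (2n + 1)

basis = [0, 5, 2, 8]                     # example index set; plays the role of P
B = B_s[:, basis]                        # square n x n basis matrix B = B_s P
e = 6
b_e = B_s[:, e]                          # the entering column
```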

This means I can effectively pre-calculate all possible values of $d$ before entering the loop. If I let $B_s^{+}$ denote the Moore-Penrose pseudoinverse of $B_s$ (and likewise for $P$), I get:

$d = P^+ B_s^+ b_e$, where $b_e$ is column $e$ of $B_s$.
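Since each $b_e$ is a column of $B_s$, the whole table of products $B_s^+ b_e$ can be read off from $B_s^+ B_s$ in one shot. A minimal NumPy sketch of that precomputation, with a random stand-in for $B_s$ (whether these values are actually the $d$ the algorithm needs is exactly what the question and answer below address):

```python
import numpy as np

# Sketch of the proposed precomputation: every b_e is a column of B_s, so all
# the products B_s^+ b_e are exactly the columns of B_s^+ B_s, computable once.
rng = np.random.default_rng(2)
n = 4
B_s = rng.standard_normal((n, 2 * n + 1))   # random stand-in for [M, -I, c]

Bs_pinv = np.linalg.pinv(B_s)               # B_s^+, computed once up front
D_all = Bs_pinv @ B_s                       # column e of D_all equals B_s^+ b_e

e = 3
b_e = B_s[:, e]
assert np.allclose(D_all[:, e], Bs_pinv @ b_e)
```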

Now the question

Given a matrix $B_s$ and its Moore-Penrose pseudoinverse $B_s^+$, is there any special significance to calculating $B_s^+b_e$ where $b_e$ is a column of $B_s$?

If $B_s$ were square and of full rank, the pseudoinverse would just be the normal matrix inverse. In which case:

  1. $b_e = B_s e$ (where $e$ is a column vector with only one entry set to $1$ and all others set to $0$).
  2. $B_s^{+} b_e = B_s^{-1} (B_s e) = (B_s^{-1} B_s) e = (I) e = e$

However, I don't think that's true for the general pseudoinverse of rectangular, possibly rank-deficient, matrices.

Best Answer

I'm not aware of anything about $B_s^{+}b_{e}$ that would be helpful here. It's not a column of the identity, because $B^{+}B$ isn't generally $I$. You can easily confirm this with a random matrix $B$.
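For example, the following quick NumPy check (sizes chosen arbitrarily) confirms it:

```python
import numpy as np

# Quick check of the claim: for a wide B_s, B_s^+ B_s is the orthogonal
# projector onto the row space of B_s, not the identity, so B_s^+ b_e is
# generally not a standard basis vector.
rng = np.random.default_rng(3)
B_s = rng.standard_normal((4, 9))        # random 4 x 9 matrix

P_row = np.linalg.pinv(B_s) @ B_s        # 9 x 9
print(np.allclose(P_row, np.eye(9)))     # False
print(np.allclose(P_row @ P_row, P_row)) # True: idempotent, i.e. a projector
```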

In any case, this would never be an appropriate way to implement Lemke's algorithm. You should look into techniques for computing an LU factorization of the basis matrix and then updating that factorization as columns move in and out of the basis. See, for example, Chvátal's "Linear Programming" textbook for an introduction to this in the context of linear programming.
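As a rough illustration of the factor-once-and-reuse half of that idea (a SciPy sketch, not a production basis-update scheme; Bartels-Golub and Forrest-Tomlin are the usual names for the full column-update techniques):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Sketch of the factor-once pattern only (not a Bartels-Golub or
# Forrest-Tomlin update): factor the current basis B once, then reuse the
# factors for every solve until the basis changes. Updating schemes go
# further and patch the factors in O(n^2) when one column is swapped,
# instead of refactoring in O(n^3).
rng = np.random.default_rng(4)
n = 6
B = rng.standard_normal((n, n))          # current basis matrix

lu, piv = lu_factor(B)                   # O(n^3), done once per basis
for b_e in rng.standard_normal((3, n)):  # several right-hand sides
    d = lu_solve((lu, piv), b_e)         # O(n^2) per solve
    assert np.allclose(B @ d, b_e)
```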
