MATLAB: Matrix inversion using “pinv” or any other technique

Tags: decomposition, inv, linear equations, MATLAB, matrix inversion, pinv, sle

I am trying to solve a system of independent linear equations, Ax = B, for x.
Many functions within MATLAB achieve this with different algorithms. The mldivide ('\') operator, lsqminnorm, and pinv are the ones I have tried. For my purposes, pinv seems to be the fastest, with reasonably good accuracy.
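Roughly, the comparison I have been running looks like this (a minimal sketch with random placeholder data, just to show the calls):

A = randn(500, 250);    % tall system: 500 equations, 250 unknowns
B = randn(500, 1);

x1 = A \ B;             % mldivide: QR-based least squares here
x2 = lsqminnorm(A, B);  % minimum-norm least squares (R2017b and later)
x3 = pinv(A) * B;       % Moore-Penrose pseudoinverse, computed via the SVD

% With full column rank, all three return the same least-squares solution
disp([norm(x1 - x2), norm(x1 - x3)])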
I am trying to understand which of the equations, or which parts of the matrices, are used when computing the inverse of A, i.e., A^-1.
Let us take an example where A is a matrix of dimension 500×250, and B is a vector of dimension 500×1. I have two prominent and probably very obvious questions:
1. When the inversion of A is performed, which of the 500 equations are used to obtain solutions for the 250 variables?
2. How does the pinv function decide which of these equations to choose?
Kindly advise.

Best Answer

The answer to this question is to take a recent course on numerical linear algebra. Literally so. OR, do a lot of reading, because writing a complete answer here would take the equivalent of writing a book on the subject. So at best, I'll just try to point you in a good direction.
Interestingly, you would want to take a recent course on numerical linear algebra, or at least read a rather recent book on the subject. :-) I say this because tools like lsqminnorm are relatively new. (For example, there was no discussion of the ideas behind LSQMINNORM when I was in grad school back in the late 1980s.) In fact, lsqminnorm was only introduced into MATLAB in R2017b. A pretty decent read on the underpinnings of lsqminnorm is the set of lecture notes linked here, which dates to only 2009/2010. Personally, I'd suggest reading that reference carefully. It is a pretty decent read, at least to me, and it discusses the underpinnings of PINV as well as LSQMINNORM. I was surprised not to see even a Wikipedia page for the COD, which forms the basis for LSQMINNORM.
The reason why I said to take an entire current course on numerical linear algebra is that you might need that background to appreciate why tools like inv, backslash, etc., fail on numerically singular problems.
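As a tiny illustration of that failure mode, here is a sketch with an exactly singular (but consistent) 2×2 system; inv and backslash warn and break down, while the rank-aware tools return the minimum-norm solution:

% Singular but consistent: x1 + x2 = 2, stated twice.
% There are infinitely many exact solutions.
A = [1 1; 1 1];
b = [2; 2];

x_inv  = inv(A) * b;        % warns that A is singular; Inf/NaN result
x_bs   = A \ b;             % same warning, same breakdown
x_pinv = pinv(A) * b;       % minimum-norm solution: [1; 1]
x_lsq  = lsqminnorm(A, b);  % also [1; 1]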
PINV will not, in general, be the fastest tool. In fact, PINV will generally be the slowest of the available tools, because it relies on the SVD, and the SVD is relatively slow to compute for large matrices. Also, PINV does not apply to sparse matrices, while lsqminnorm is able to handle them.
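For instance, here is a quick (and unscientific) timing sketch on a dense, full-rank problem, just to illustrate the relative costs; actual timings will vary with size, rank, and hardware:

A = randn(2000, 1000);
b = randn(2000, 1);

tic; x1 = A \ b;            t_bs   = toc;
tic; x2 = lsqminnorm(A, b); t_lsq  = toc;
tic; x3 = pinv(A) * b;      t_pinv = toc;   % SVD dominates the cost here

fprintf('backslash: %.3fs   lsqminnorm: %.3fs   pinv: %.3fs\n', ...
    t_bs, t_lsq, t_pinv)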
You ask which parts of A these tools use. In fact, the goal is to use all of the matrix, but in subtly different ways.
So, you are talking about a non-square matrix, of size 500×250. INV is not even an option here; we cannot compute the inverse of A at all, because it does not exist for non-square matrices. At best, you can compute a generalized inverse of some sort. In the past (and yes, numerical linear algebra has changed over the last 10 to 40 years or so) this usually came down to tools based on the SVD, hence PINV. LSQMINNORM uses the COD (Complete Orthogonal Decomposition), which you might think of as a QR decomposition on steroids. (I wonder if Christine smiles when she reads that.) But because it uses the COD, it should be faster than PINV, since the COD can be computed from just a fixed set of additional Householder transformations applied to the result of a QR decomposition.
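To make the SVD connection concrete, here is a sketch of what PINV effectively computes: drop the singular values below a tolerance, invert the rest, and apply the result to B. (The random rank-deficient test matrix is just illustrative; the tolerance shown mirrors pinv's documented default.)

A = randn(500, 200) * randn(200, 250);   % a 500x250 test matrix of rank 200
b = randn(500, 1);

[U, S, V] = svd(A, 'econ');
s   = diag(S);
tol = max(size(A)) * eps(max(s));   % pinv's documented default tolerance
r   = sum(s > tol);                 % numerical rank

% Minimum-norm least-squares solution from the SVD: invert A only
% on the subspace of "significant" singular values
x_svd = V(:, 1:r) * ((U(:, 1:r)' * b) ./ s(1:r));

norm(x_svd - pinv(A) * b)   % agrees with PINV to roundoff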
Sadly, I recognize that I've not really said much here, but what I did say took half a page to write. Those lecture notes I provided a link to are a very good start. Happy reading. If you are very lucky, perhaps Christine will clear up some of what I have glossed over.