Why (i.e., for which advantages) do we use different matrix factorization algorithms?

lu-decomposition, matrix-decomposition, numerical-linear-algebra, numerical-methods, numerical-optimization

For the case of the $PA=LU$ factorization, I found some documents saying that it removes the possibility of having zeros on the diagonal of the matrix $A$. But I am not sure I understood that correctly. If so, what is the problem with having zeros on the diagonal of the matrix $A$?

Other related questions for which I couldn't find any easy explanation:

  • What are the advantages / disadvantages of LU factorization? And when to use it?
  • What are the advantages / disadvantages of PA=LU factorization? And when to use it?
  • What are the advantages / disadvantages of Cholesky factorization? And when to use it?
  • What are the advantages / disadvantages of QR factorization? And when to use it?

Thank you for any help!

Best Answer

Partial answer:

If you have a system $A x = b$, you can decompose the matrix $A$ to find the $x$ that minimizes $\|Ax-b\|_2$. Analytically, the solution is $x = A^\dagger b$, where $A^\dagger$ is the pseudo-inverse of $A$. However, determining the pseudo-inverse of $A$ analytically (for instance as $A^\dagger = (A^T A)^{-1} A^T$) is numerically unstable.
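As a rough illustration (a minimal sketch in NumPy; the random overdetermined system is just a placeholder, not from the question), forming the normal equations squares the condition number of $A$, which is one way to see where the instability comes from:

```python
import numpy as np

# Placeholder overdetermined system with full column rank.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)

# "Analytic" pseudo-inverse route via the normal equations:
# x = (A^T A)^{-1} A^T b.  Forming A^T A squares the condition number,
# which is why this route can be numerically unstable for ill-conditioned A.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Library pseudo-inverse (computed internally via the SVD).
x_pinv = np.linalg.pinv(A) @ b

print(np.linalg.cond(A), np.linalg.cond(A.T @ A))  # cond(A^T A) ~ cond(A)^2
print(np.allclose(x_normal, x_pinv))               # agree here; A is well conditioned
```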

So you can set $A=LU$, $A=QR$, or $A=U\Sigma V^T$, where the last decomposition is the singular value decomposition. These decompositions permit numerically stable estimation of the pseudo-inverse and can be used to solve the optimization problem. $LU$ is the fastest decomposition, but the least numerically stable. The SVD is the slowest decomposition, but permits the most numerically stable algorithm.
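To make the comparison concrete, here is a minimal sketch (assuming NumPy/SciPy, with small random matrices as stand-ins) of solving a system through each route: LU with partial pivoting ($PA=LU$) for a square system, and QR or SVD for the least-squares problem:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, solve_triangular

rng = np.random.default_rng(1)

# Square, nonsingular system: LU with partial pivoting (PA = LU) is the
# standard direct solver -- fastest of the three, least robust.
A_sq = rng.standard_normal((5, 5))
b_sq = rng.standard_normal(5)
lu, piv = lu_factor(A_sq)           # factorization PA = LU
x_lu = lu_solve((lu, piv), b_sq)    # forward/back substitution

# Overdetermined least-squares problem: QR is the usual workhorse.
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
Q, R = np.linalg.qr(A)              # reduced QR: A = QR, R upper triangular
x_qr = solve_triangular(R, Q.T @ b)

# SVD: slowest, but the most numerically stable (also handles rank deficiency).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

print(np.allclose(x_lu, np.linalg.solve(A_sq, b_sq)))
print(np.allclose(x_qr, x_svd))
```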
