Linear Algebra – Understanding Pivot Numbers in LU Decomposition with Examples

Tags: linear-algebra, matrices

After studying many pages, such as Wikipedia, Wolfram, MathWorks, Math Stack Exchange, lu-pivot, and LU_Decomposition, I couldn't find exactly what the pivot numbers are in a specific example. For example, what are the pivot numbers here:

[image: an LU decomposition of a $3\times3$ matrix]
(I've extracted the LU decomposition above from an example in this PDF.)
or this one:
[image: an LU decomposition of a $4\times4$ matrix]
Can you explain how to extract the pivot numbers in both of the examples above?
Also, if possible, can you say a little about these questions:

  • How can we extract the pivot numbers under the various forms of pivoting, e.g. LU factorization with partial pivoting ($PA = LU$), LU factorization with full pivoting ($PAQ = LU$), and LDU decomposition ($A = LDU$)?
  • How can we tell, before starting the decomposition, which permutation matrix $P$ should multiply $A$ so that we obtain a stable LU decomposition?

Best Answer

There are a lot of theory items in your question, and you will have to review when to use which approach and why, plus the pros and cons of these methods (there is not enough room in the margins to add all of that here). It should be made clear that you cannot always get a factorization without resorting to partial or full pivoting, and there can also be scaling and performance issues. It is important to understand all of these nuances. Also, the pivoting strategies are described in any numerical analysis text that covers Gaussian elimination.

I am going to make some comments about your first example and let you work through it, since it is the easier one. I will also provide an answer to your second question. For the rest, you will have to read up on when they are needed and how they are used.

Problem 1:

Let's look at your example. We are given:

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & -4 & 6 \\ 3 & -9 & -3 \end{bmatrix}$$

Solution 1:

The link in this first problem (you will have to work through the example by hand) shows the approach that instructor used. The $LU$ produced by those steps involved no row permutations at all (nor column swaps, for that matter); in other words, $P = I$, the $3\times3$ identity matrix. The pivots are the diagonal entries of $L$.

Also, they used Crout's method, which puts ones in the diagonal entries of $U$, as opposed to Doolittle's Method (which puts ones in the diagonal entries of $L$).

You can verify that $A = LU$ directly.
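
Since the image of that factorization isn't reproduced here, here is a minimal Python/NumPy sketch of Crout's method (ones on the diagonal of $U$) that you can use to recompute the instructor's factors and read the pivots off the diagonal of $L$; the function name `crout` is just for illustration.

```python
import numpy as np

def crout(A):
    """Crout factorization A = L U with ones on the diagonal of U.

    A sketch that assumes no pivoting is needed (all leading principal
    minors of A are nonzero), which is the case for this example.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        # Column j of L; the diagonal entry L[j, j] is the j-th pivot.
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Row j of U, divided by the pivot.
        for k in range(j + 1, n):
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

A = np.array([[1, 2, 3], [2, -4, 6], [3, -9, -3]], dtype=float)
L, U = crout(A)
print(np.allclose(L @ U, A))  # True
print(np.diag(L))             # pivots: 1, -8, -12
```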

Solution 2:

However, we could have also used this method (Doolittle's):

$$\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} \cdot \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 \\ 2 & -4 & 6 \\ 3 & -9 & -3 \end{bmatrix}$$

Solving for each of the variables in the correct order yields:

  • $u_{11} = 1, u_{12} = 2, u_{13} = 3$
  • $l_{21} = 2, u_{22} = -8, u_{23} = 0$
  • $l_{31} = 3, l_{32} = \dfrac{15}{8}, u_{33} = -12$

So, for this choice we have:

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & -4 & 6 \\ 3 & -9 & -3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & \dfrac{15}{8} & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 & 3 \\ 0 & -8 & 0 \\ 0 & 0 & -12 \end{bmatrix}$$
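
As a quick sanity check (just a NumPy sketch, not part of the pencil-and-paper solution), you can multiply these factors back together and read the pivots off the diagonal of $U$:

```python
import numpy as np

A = np.array([[1, 2, 3], [2, -4, 6], [3, -9, -3]], dtype=float)
L = np.array([[1, 0, 0], [2, 1, 0], [3, 15/8, 1]])
U = np.array([[1, 2, 3], [0, -8, 0], [0, 0, -12]], dtype=float)

print(np.allclose(L @ U, A))  # True: this really is a factorization of A
print(np.diag(U))             # pivots: 1, -8, -12
```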

We could also have used Crout's method for the $LU$ approach (or Cholesky's method, if the matrix were symmetric positive definite).

Solution 3:

We might decide we do not want a direct factorization because we are concerned about the growth of rounding errors.

In this case, we perform two row swaps and obtain $PA = LU$ (do you see that $P$ is just keeping track of our row swaps?). For example, we could get:

$$PA = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 & 3 \\ 2 & -4 & 6 \\ 3 & -9 & -3 \end{bmatrix} = LU = \begin{bmatrix} 1 & 0 & 0 \\ \dfrac{1}{3} & 1 & 0 \\ \dfrac{2}{3} & \dfrac{2}{5} & 1 \end{bmatrix} \cdot \begin{bmatrix} 3 & -9 & -3 \\ 0 & 5 & 4 \\ 0 & 0 & \dfrac{32}{5} \end{bmatrix}$$
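
Again, as a hedged sketch: the factors above can be checked numerically, and, if you have SciPy available, compared against `scipy.linalg.lu`, which also uses partial pivoting but reports its permutation in the $A = PLU$ convention (so its $P$ should be the transpose of the one above).

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[1, 2, 3], [2, -4, 6], [3, -9, -3]], dtype=float)
P = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
L = np.array([[1, 0, 0], [1/3, 1, 0], [2/3, 2/5, 1]])
U = np.array([[3, -9, -3], [0, 5, 4], [0, 0, 32/5]])

print(np.allclose(P @ A, L @ U))  # True: PA = LU

# scipy.linalg.lu factors A as A = P @ L @ U, so its permutation should
# be the transpose of the row-swap matrix P used above.
p, l, u = lu(A)
print(np.allclose(A, p @ l @ u))  # True
```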

Problem 2:

We are given:

$$A = \begin{bmatrix} 2 & 1 & 1 & 0 \\ 4 & 3 & 3 & 1 \\ 8 & 7 & 9 & 5 \\ 6 & 7 & 9 & 8 \end{bmatrix}$$

Step 1

We write $P$ and $L$ as the $4\times4$ identity matrix and let $U = A$, so we have:

$$U = \begin{bmatrix} 2 & 1 & 1 & 0 \\ 4 & 3 & 3 & 1 \\ 8 & 7 & 9 & 5 \\ 6 & 7 & 9 & 8 \end{bmatrix}, ~L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, ~P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The first step of the factorization process is to determine the entry of largest magnitude in column $1$. This is the entry $8$ in row $3$. We therefore swap rows $1$ and $3$ of the matrices $U$ and $P$ to obtain:

$$U = \begin{bmatrix} 8 & 7 & 9 & 5 \\ 4 & 3 & 3 & 1 \\ 2 & 1 & 1 & 0 \\ 6 & 7 & 9 & 8 \end{bmatrix}, ~L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, ~P = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Step 2

We then subtract suitable multiples of row $1$ of $U$ from rows $2$ through $4$ to create zeros in the first column of $U$ below the diagonal. The multipliers (the negatives of the coefficients in the row operations below) are stored in the subdiagonal entries of the first column of the matrix $L$. These operations are: $-1/2 R1 + R2 \rightarrow R2, -1/4 R1 + R3 \rightarrow R3, -3/4 R1 + R4 \rightarrow R4$. This gives the matrices:

$$U = \begin{bmatrix} 8 & 7 & 9 & 5 \\ 0 & -1/2 & -3/2 & -3/2 \\ 0 & -3/4 & -5/4 & -5/4 \\ 0 & 7/4 & 9/4 & 17/4 \end{bmatrix}, ~L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 1 & 0 & 0 \\ 1/4 & 0 & 1 & 0 \\ 3/4 & 0 & 0 & 1 \end{bmatrix}, ~P = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
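
To make the bookkeeping concrete, here is a short NumPy sketch of just this elimination step (the variable names are my own): it applies the three row operations to $U$, stores the multipliers in column $1$ of $L$, and reproduces the matrices displayed above.

```python
import numpy as np

# U and L as they stand after the row swap in Step 1.
U = np.array([[8, 7, 9, 5],
              [4, 3, 3, 1],
              [2, 1, 1, 0],
              [6, 7, 9, 8]], dtype=float)
L = np.eye(4)

k = 0  # we are eliminating below the pivot U[0, 0] = 8
for i in range(k + 1, 4):
    m = U[i, k] / U[k, k]   # multipliers 1/2, 1/4, 3/4
    U[i, :] -= m * U[k, :]  # e.g. -1/2 R1 + R2 -> R2
    L[i, k] = m             # store the multiplier in column 1 of L

print(U)  # matches the U shown after Step 2
print(L)  # matches the L shown after Step 2
```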

Step 3

We now look at entries $2$ through $4$ of column $2$ of $U$ and note that the entry of largest magnitude, $7/4$, sits in the last row. Hence, we interchange rows $2$ and $4$ of the matrices $U$ and $P$. We also need to swap the subdiagonal entries of rows $2$ and $4$ of $L$. This gives us:

$$U = \begin{bmatrix} 8 & 7 & 9 & 5 \\ 0 & 7/4 & 9/4 & 17/4 \\ 0 & -3/4 & -5/4 & -5/4 \\ 0 & -1/2 & -3/2 & -3/2 \end{bmatrix}, ~L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 3/4 & 1 & 0 & 0 \\ 1/4 & 0 & 1 & 0 \\ 1/2 & 0 & 0 & 1 \end{bmatrix}, ~P = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$

Step 4

We subtract suitable multiples of row $2$ from rows $3$ and $4$ to obtain zeros in the second column of $U$ below the diagonal. The multipliers (again, the negatives of the coefficients in the row operations) are stored in the subdiagonal entries of the second column of $L$. These operations are: $3/7 R2 + R3 \rightarrow R3, 2/7 R2 + R4 \rightarrow R4$. This gives us:

$$U = \begin{bmatrix} 8 & 7 & 9 & 5 \\ 0 & 7/4 & 9/4 & 17/4 \\ 0 & 0 & -2/7 & 4/7 \\ 0 & 0 & -6/7 & -2/7 \end{bmatrix}, ~L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 3/4 & 1 & 0 & 0 \\ 1/4 & -3/7 & 1 & 0 \\ 1/2 & -2/7 & 0 & 1 \end{bmatrix}, ~P = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$

Step 5

We turn to the last $2$ entries of column $3$ of $U$, and notice that the last entry has the largest magnitude. Hence, we swap rows $3$ and $4$ of the matrices $U$ and $P$, and we interchange the subdiagonal entries in rows $3$ and $4$ of $L$ to obtain:

$$U = \begin{bmatrix} 8 & 7 & 9 & 5 \\ 0 & 7/4 & 9/4 & 17/4 \\ 0 & 0 & -6/7 & -2/7 \\ 0 & 0 & -2/7 & 4/7 \end{bmatrix}, ~L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 3/4 & 1 & 0 & 0 \\ 1/2 & -2/7 & 1 & 0 \\ 1/4 & -3/7 & 0 & 1 \end{bmatrix}, ~P = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}$$

Step 6

The final step of the factorization is to apply $-1/3 R3 + R4 \rightarrow R4$ to $U$ and store the multiplier $1/3$ in the subdiagonal entry of column $3$ of $L$. This yields:

$$U = \begin{bmatrix} 8 & 7 & 9 & 5 \\ 0 & 7/4 & 9/4 & 17/4 \\ 0 & 0 & -6/7 & -2/7 \\ 0 & 0 & 0 & 2/3 \end{bmatrix}, ~L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 3/4 & 1 & 0 & 0 \\ 1/2 & -2/7 & 1 & 0 \\ 1/4 & -3/7 & 1/3 & 1 \end{bmatrix}, ~P = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}$$

Of course, we can verify that:

$$PA = LU$$

We also know that a permutation matrix satisfies $P^T P = I$, so we can also verify:

$$A = P^T L U$$
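
To tie the steps together, here is a short Python sketch (my own illustration, with a hypothetical name `lu_partial_pivot`) of the procedure described above: swap rows of $U$ and $P$, swap the already-computed part of $L$, and store the multipliers. It reproduces the pivots $8$, $7/4$, $-6/7$, $2/3$ for this $4\times4$ example.

```python
import numpy as np

def lu_partial_pivot(A):
    """LU factorization with partial pivoting, PA = LU, following the
    steps above. An illustrative sketch; in practice you would call a
    library routine such as scipy.linalg.lu."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U, L, P = A.copy(), np.eye(n), np.eye(n)
    for k in range(n - 1):
        # Bring the largest-magnitude entry of column k (rows k..n-1) up to row k.
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p], :] = U[[p, k], :]
        P[[k, p], :] = P[[p, k], :]
        L[[k, p], :k] = L[[p, k], :k]  # swap only the already-filled part of L
        # Eliminate below the pivot, storing the multipliers in L.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, :] -= L[i, k] * U[k, :]
    return P, L, U

A = np.array([[2, 1, 1, 0],
              [4, 3, 3, 1],
              [8, 7, 9, 5],
              [6, 7, 9, 8]], dtype=float)
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))    # True
print(np.allclose(A, P.T @ L @ U))  # True
print(np.diag(U))                   # pivots: 8, 7/4, -6/7, 2/3
```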

I recommend you work the Crout method the way I showed the Doolittle method, on the easier $3\times3$, since you already have its solution; use my approach as opposed to what your solution shows. You might also want to crank through Solution $3$ by hand and check that you get all the pivots, $P$, $L$, and $U$, now that you have this detailed $4\times4$ example to follow.

Update 1:

Also, I think you did not find the pivoting details because they are typically covered as part of Gaussian elimination. See, for example:
