[Math] Explaining roundoff error when row reducing matrices

examples-counterexamples, linear algebra, numerical linear algebra, numerical methods

In my linear algebra textbook (in the context of row reducing and obtaining a matrix in echelon or reduced echelon form), there is a numerical note that reads as follows: "A computer program usually selects as a pivot the entry in a column having the largest absolute value. This strategy, called partial pivoting, is used because it reduces roundoff errors in calculations."

That is all it says. It doesn't give an actual explanation of why partial pivoting reduces roundoff error. I was wondering if someone might be able to explain this or give an example.

Best Answer

Consider the following matrix: $$M = \begin{bmatrix} 0.1 & & & & & 1 \\ -1 & 0.1 \\ & -1 & 0.1 \\ && -1 & 0.1 \\ &&& -1 & 0.1 \\ &&&&-1 & 1 \end{bmatrix},$$ where all the blank spaces are also zeros.

If you start doing Gaussian elimination without pivoting, the $1$ in the upper right keeps moving down the rows, getting multiplied by $10$ each time (each elimination step adds $10$ times the pivot row to the row below, since every multiplier is $-1/0.1 = -10$), so it grows very large in the process. The LU factorization becomes $$M = \begin{bmatrix} 1 \\ -10 & 1 \\ & -10 & 1 \\ && -10 & 1 \\ &&& -10 & 1 \\ &&&& -10 & 1 \\ \end{bmatrix} \begin{bmatrix} 0.1 &&&&& 1\\ & 0.1 &&&& 10\\ && 0.1 &&& 10^2\\ &&& 0.1 && 10^3\\ &&&& 0.1 & 10^4\\ &&&&& 10^5 + 1 \end{bmatrix}$$

Now, let's say your computer has floating-point accuracy $10^{-5}$, so that the last entry $10^5+1$ gets rounded to $10^5$. After making this roundoff, if we multiply the matrices back together we get $$\begin{bmatrix} 1 \\ -10 & 1 \\ & -10 & 1 \\ && -10 & 1 \\ &&& -10 & 1 \\ &&&& -10 & 1 \\ \end{bmatrix} \begin{bmatrix} 0.1 &&&&& 1\\ & 0.1 &&&& 10\\ && 0.1 &&& 10^2\\ &&& 0.1 && 10^3\\ &&&& 0.1 & 10^4\\ &&&&& 10^5 \end{bmatrix} = \begin{bmatrix} 0.1 & & & & & 1 \\ -1 & 0.1 \\ & -1 & 0.1 \\ && -1 & 0.1 \\ &&& -1 & 0.1 \\ &&&&-1 & 0 \end{bmatrix}$$ Note the difference from the original $M$: the lower-right entry changed from a $1$ to a $0$. Whoops!
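
If you want to watch this happen numerically, here is a minimal NumPy sketch (the helpers `chop` and `lu_chopped` are my own names, not library functions) that simulates the hypothetical machine by rounding every intermediate result to five significant decimal digits, so that $10^5+1$ collapses to $10^5$ just as above:

```python
import numpy as np

def chop(x, digits=5):
    """Round to `digits` significant decimal digits, simulating a machine
    that cannot distinguish 10**5 + 1 from 10**5."""
    x = np.atleast_1d(np.asarray(x, dtype=float)).copy()
    nz = x != 0
    exp = np.floor(np.log10(np.abs(x[nz])))
    scale = 10.0 ** (digits - 1 - exp)
    x[nz] = np.round(x[nz] * scale) / scale
    return x if x.size > 1 else x[0]

def lu_chopped(a, digits=5):
    """Gaussian elimination with NO pivoting, chopping every result."""
    a = chop(a, digits)
    n = len(a)
    l = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            l[i, k] = chop(a[i, k] / a[k, k], digits)
            a[i, k:] = chop(a[i, k:] - l[i, k] * a[k, k:], digits)
    return l, a

# The 6 x 6 matrix M from the answer.
M = np.zeros((6, 6))
M[range(5), range(5)] = 0.1      # 0.1 down the diagonal (first five rows)
M[range(1, 6), range(5)] = -1.0  # -1 on the subdiagonal
M[0, 5] = 1.0                    # the troublesome upper-right entry
M[5, 5] = 1.0

L, U = lu_chopped(M)
print(U[5, 5])            # 100000.0 -- the "+1" has been rounded away
print((L @ U - M)[5, 5])  # -1.0 -- the corner entry of M is lost entirely
```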

As an exercise, try performing Gaussian elimination on this matrix with partial pivoting (things will come out much more nicely).
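
For comparison, here is the same chopped arithmetic with partial pivoting added (this reuses `chop` and `M` from the sketch above; again, the function name is mine). Swapping the largest-magnitude entry of each column into the pivot position keeps every multiplier at magnitude at most $1$, so nothing in the factorization can grow:

```python
def lu_partial_pivot_chopped(a, digits=5):
    """Same chopped arithmetic, but swap the largest-magnitude entry of
    each column into the pivot position first (partial pivoting)."""
    a = chop(a, digits)
    n = len(a)
    l = np.eye(n)
    perm = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))  # row with the biggest pivot
        a[[k, p]] = a[[p, k]]                # swap the rows ...
        l[[k, p], :k] = l[[p, k], :k]        # ... and their multipliers
        perm[[k, p]] = perm[[p, k]]
        for i in range(k + 1, n):
            l[i, k] = chop(a[i, k] / a[k, k], digits)  # now |l[i, k]| <= 1
            a[i, k:] = chop(a[i, k:] - l[i, k] * a[k, k:], digits)
    return perm, l, a

perm, L, U = lu_partial_pivot_chopped(M)
print(np.abs(L @ U - M[perm]).max())  # about 1e-5: the error stays at the
                                      # simulated machine precision, not O(1)
```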


Note: in reality, floating-point precision is around $10^{-7}$ for single-precision floats and $10^{-16}$ for doubles.
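
If you want the exact figures, NumPy reports the machine epsilon of each type directly:

```python
import numpy as np

print(np.finfo(np.float32).eps)  # ~1.19e-07: single precision
print(np.finfo(np.float64).eps)  # ~2.22e-16: double precision
```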
