Linear Algebra – Pivoting on a Matrix Element

linear-algebra, matrices

I'm confused about the concept of pivoting on a matrix element. It seems to be a fundamental operation that is used as a building block for more complex operations, such as inverting a matrix.

However, the definition of pivoting, and what it entails, seems to vary depending on the context.

In the context of inverting a matrix, for example, pivoting entails changing the pivot element to 1 and then all other elements in the same column to 0 (appropriately adjusting the remaining elements in each affected row). Other times, however, pivoting is defined as only changing the other elements in the same column to 0, while leaving the pivot element as it is.
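To make the two conventions concrete, here is a small sketch (the function names are mine, not standard terminology): the first, Gauss–Jordan-style pivot normalizes the pivot element to 1 and clears the whole column; the second, plain elimination pivot leaves the pivot element alone and only clears the entries below it.

```python
import numpy as np

def pivot_gauss_jordan(A, r, c):
    """Full (Gauss-Jordan) pivot on element (r, c):
    scale row r so the pivot becomes 1, then eliminate
    every other entry in column c."""
    A = A.astype(float).copy()
    A[r] = A[r] / A[r, c]                   # pivot element -> 1
    for i in range(A.shape[0]):
        if i != r:
            A[i] = A[i] - A[i, c] * A[r]    # clear column c elsewhere
    return A

def pivot_gaussian(A, r, c):
    """Plain elimination pivot on element (r, c):
    leave the pivot as-is and only zero the entries
    below it in column c."""
    A = A.astype(float).copy()
    for i in range(r + 1, A.shape[0]):
        A[i] = A[i] - (A[i, c] / A[r, c]) * A[r]
    return A
```

For example, pivoting on the (0, 0) entry of `[[2, 1], [4, 3]]` gives `[[1, 0.5], [0, 1]]` under the first convention but `[[2, 1], [0, 1]]` under the second: both zero the rest of the column, and they differ only in whether the pivot itself is normalized to 1.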

Is there a universal definition of pivoting, or is it simply a casual term which changes depending on context? Or, to put it another way: must the pivot element always be a 1?

Best Answer

One can better understand the role that pivoting plays in Gaussian elimination by viewing it from more general perspectives. For example, one can compare analogous elimination algorithms over rings (vs. fields), e.g. Hermite / Smith normal forms. Additionally, one can compare the more general choice of "critical pair" pivots in non-linear elimination algorithms such as Gröbner basis algorithms, or the more general Knuth-Bendix equational completion, etc. Here an optimal choice of "pivoting" / critical-pair strategy can prove crucial to tractable computation (e.g. to avoid combinatorial explosion). The Knuth-Bendix algorithm provides a fairly universal point of view that encompasses all these elimination algorithms.