[Math] Pivots and singular cases in Gaussian Elimination

linear algebra

I'm familiar with Gaussian Elimination from having used row operations to solve systems of linear equations in the past, but I've started reading a formal textbook on it, namely "Linear Algebra and Its Applications" by Gilbert Strang, and I'm having trouble understanding some details he mentions about pivot elements and singular cases. I think that by explaining what I believe I know about what I'm doing, I can clue others in as to where I'm getting confused.

Basically, as far as I'm concerned, when we have a system of linear equations, like, off the top of my head and just for illustration,

$$x+y+z = 5$$
$$3x+2y-5z=20$$
$$-4x-2y+z=3$$ the vector-space interpretation is as follows: since $x, y$ and $z$ appear in 3 different equations with varying coefficients and right-hand sides, the column picture reads $x$, $y$, and $z$ as the scalars needed to combine the three coefficient columns, by vector addition, into the vector formed by the constants on the $RHS$ of the 3 equations. In other words, the solution consists of the scalars for $x, y$ and $z$ that produce the $RHS$ vector as a linear combination of the columns. Hopefully this wasn't a complete word salad.

In the row picture interpretation, the solution is the value or values of $x$, $y$, and $z$ that specify the intersection of the three planes given by the 3 equations.
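If it helps, here is a quick NumPy sanity check of the column picture on my illustrative system above (the particular solution doesn't matter, it's just whatever that system happens to have):

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system above.
A = np.array([[ 1.0,  1.0,  1.0],
              [ 3.0,  2.0, -5.0],
              [-4.0, -2.0,  1.0]])
b = np.array([5.0, 20.0, 3.0])

# Solve A x = b.
x, y, z = np.linalg.solve(A, b)

# Column picture: the solution supplies the scalars that combine the
# columns of A, by vector addition, into the right-hand-side vector b.
combo = x * A[:, 0] + y * A[:, 1] + z * A[:, 2]
print(np.allclose(combo, b))  # True
```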

Now, in elimination, there is something I'm being told about called pivot elements. From what I gather, at least when doing Gaussian Elimination with matrices, a pivot is the entry in the matrix that you plan to combine with a newly operated-on row so that the sum is $0$, which eventually isolates one variable to one solution and then lets you use back-substitution to solve the system of linear equations. For instance, if, in the system above, I subtracted 3 times equation 1 from equation 2, $3$ would be a pivot element, as it causes the $x$ coefficient to become $0$ in a row.
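For concreteness, here is the elimination arithmetic I get on my example system above, written in augmented-matrix form (subtract $3\times$ row 1 from row 2, add $4\times$ row 1 to row 3, then add $2\times$ row 2 to row 3):

$$\left[\begin{array}{rrr|r} 1 & 1 & 1 & 5\\ 3 & 2 & -5 & 20\\ -4 & -2 & 1 & 3 \end{array}\right] \;\to\; \left[\begin{array}{rrr|r} 1 & 1 & 1 & 5\\ 0 & -1 & -8 & 5\\ 0 & 2 & 5 & 23 \end{array}\right] \;\to\; \left[\begin{array}{rrr|r} 1 & 1 & 1 & 5\\ 0 & -1 & -8 & 5\\ 0 & 0 & -11 & 33 \end{array}\right]$$

so after elimination the diagonal entries are $1$, $-1$, and $-11$, and back-substitution gives $z = 33/(-11) = -3$, then $y$, then $x$.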

However, Strang notes that "we divide by them" in reference to pivot elements, so I think my idea of what a pivot element is must be false.

Proceeding, he mentions that in a singular case, a $0$ occurs in a pivot position, and elimination must then stop.

I'm not exactly sure what that means, but he gave an example of a system of linear equations that couldn't be solved by elimination:

[image: Strang's example of a singular system of linear equations]

From my own experience, I can't use elimination on this system because I can't get zeros in the bottom-left, middle-left, and bottom-middle entries (the $(3,1)$, $(2,1)$, and $(3,2)$ entries). Instead I end up with two zeros in the $(2,1)$ and $(2,2)$ entries, as well as two zeros in the $(3,1)$ and $(3,2)$ entries. This breaks things when I try to solve it, so I can see why the system is unsolvable, but not what he means by "$0$ being in a pivot position". And since we apparently divide by pivot elements, a zero there would cause a problem, since we cannot divide by zero.

All of this aside, I have two main questions:

  • What part of my interpretation of a pivot element is wrong, and how is it that we divide by them?

  • What does he mean by a "$0$ in a pivot position" causing a singular case, ideally explained in the context of why I found it impossible to apply elimination to the example system he provided?

I apologize if this is a lot to answer, but I felt the questions weren't long-winded enough to necessitate splitting each into its own thread.

Best Answer

• So basically, once you obtain a triangular system, you carry out back-substitution to obtain the values of the variables.

Now, if a pivot element were $0$, back-substitution would require dividing by zero, and the solution would be indeterminable. Hence, pivot elements cannot be $0$, by definition.
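A minimal back-substitution sketch in Python (assuming an upper-triangular system; the numbers are the triangular system obtained by eliminating the asker's example) makes the division by pivots explicit:

```python
import numpy as np

def back_substitute(U, c):
    """Solve U x = c for upper-triangular U by back substitution."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Each step divides by the diagonal entry U[i, i] -- the pivot.
        # A zero pivot would make this division impossible.
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# The triangular system produced by eliminating the example system above.
U = np.array([[1.0,  1.0,  1.0],
              [0.0, -1.0, -8.0],
              [0.0,  0.0, -11.0]])
c = np.array([5.0, 5.0, 33.0])
print(back_substitute(U, c))  # solution (x, y, z) = (-11, 19, -3)
```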

• While manipulating the original system into a triangular system, we divide by the pivot element of the $1^{st}$ equation to form the multipliers that make $a_{n1} = 0$, and so on and so forth. So when you cannot obtain a non-zero pivot element to make $a_{n2} = 0$ in the remaining equations, you can't go further with Gaussian elimination.
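The process above can be sketched in a few lines of Python. This is a plain elimination without row exchanges (a minimal illustration, not Strang's exact algorithm); the point is that each multiplier is an entry divided by the current pivot, so a zero in a pivot position forces the process to stop:

```python
import numpy as np

def eliminate(A, b):
    """Gaussian elimination without row exchanges.

    Raises ValueError when a zero appears in a pivot position:
    the 'singular case' in which elimination must stop (at least,
    without resorting to row exchanges)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n):
        pivot = A[k, k]
        if pivot == 0.0:
            raise ValueError(f"zero in pivot position {k + 1}; elimination stops")
        for i in range(k + 1, n):
            l = A[i, k] / pivot        # multiplier: entry divided by the pivot
            A[i, k:] -= l * A[k, k:]   # zero out the entry below the pivot
            b[i] -= l * b[k]
    return A, b

# On the asker's example system the diagonal ends up holding
# the pivots 1, -1, -11.
U, c = eliminate(np.array([[ 1.0,  1.0,  1.0],
                           [ 3.0,  2.0, -5.0],
                           [-4.0, -2.0,  1.0]]),
                 np.array([5.0, 20.0, 3.0]))
print(np.diag(U))

# A singular case (hypothetical example): eliminating x + y = 2,
# 2x + 2y = 5 leaves a zero in the second pivot position and raises.
```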
