I'm having some trouble understanding the difference between partial and complete pivoting in Gaussian elimination. I've found a few sources that say different things about what is allowed in each kind of pivoting. From my understanding, in partial pivoting we are only allowed to swap the columns (while looking only at a particular row), whereas in complete pivoting we look for the largest value in the whole matrix and move it "to the top" by swapping both columns and rows. Is this correct, or am I wrong?
[Math] Gauss elimination: Difference between partial and complete pivoting
gaussian-elimination, linear-algebra, matrices, numerical-methods
Related Solutions
If it were indeed $PA = PLU$, then $P$ would be useless: just premultiply by $P^{-1} = P^T$ and you would get $A = LU$. No, it has to be $PA = LU$.
You do it in steps:
Find the pivot row and get it to the first position, say by swapping it with the first row. This will give you a permutation $P_1$.
Do the first column elimination of $P_1 A$ to obtain $$P_1 A = L_1 A_1,$$ where $L_1$ is lower triangular with a possibly non-zero first column and $1$ on its diagonal, while $A_1$ has zeros below the diagonal in the first column.
Repeat on the $A_1[2:n,2:n]$ (i.e., $A_1$ minus its first row and column) to obtain $P_2$, $L_2$, $A_2$, and so on.
When done, multiply (or smartly combine) all $P_k$ and $L_k$ (extended to dimension $n$ by identity) to get $P$ and $L$, respectively. You get $U$ by combining the first rows from all $A_k$.
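A minimal sketch of these steps in Python/NumPy (the function name `plu` and the way the $P_k$ and $L_k$ are combined in place are my own choices, not prescribed by the text):

```python
import numpy as np

def plu(A):
    """LU with partial pivoting: returns P, L, U with P @ A = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        # Step 1: find the pivot row and swap it into position k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]  # keep earlier multipliers aligned
        # Step 2: eliminate below the pivot, storing multipliers in L.
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, np.triu(A)
```

Applying the row swaps to the already-computed columns of `L` is what "smartly combining" all the $P_k$ and $L_k$ amounts to in practice.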
You should probably check out the Crout and LUP algorithms, where a similar - but, I think, somewhat better - algorithm is described.
A bit on the stability of LU
A question in the comments:
But you see, $PA = LU$ is the $LU$ decomposition of $PA$. I want the $LU$ decomposition of $A$. But since $A$ is not stable, from what I understood, I need to find $LU$ such that $A = LU$ by doing an LU factorization on $PA$. As in - find the $LU$ factorization of $A$, by running the $LU$ factorization of the stable $PA$. does that not make sense?
Sensible as it may seem, it is not.
Take any $A$ such that $A_{1,1} = 0$ and you will not be able to do LU at all.
Now, take $A_{1,1}>0$, but very small, say $A_{1,1}=10^{-15}$, while all other elements of $A$ are non-zero integers. See what happens.
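Here is what happens, sketched on a hypothetical $2 \times 2$ system in Python (double precision); the matrix and right-hand side are my own toy example:

```python
import numpy as np

# A tiny but nonzero pivot A[0,0] = 1e-15 (with A[0,0] = 0 the
# elimination would fail outright).
eps = 1e-15
A = np.array([[eps, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
# Exact solution: x1 = 1/(1-eps) ~ 1,  x2 = (1-2*eps)/(1-eps) ~ 1.

# Elimination WITHOUT pivoting: the multiplier m = 1/eps is enormous.
m = A[1, 0] / A[0, 0]
u22 = A[1, 1] - m * A[0, 1]
x2 = (b[1] - m * b[0]) / u22
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]   # rounding error in x2 amplified by 1/eps
x_nopivot = np.array([x1, x2])

# WITH a row swap (partial pivoting) the multiplier is eps instead,
# and the computed solution is accurate.
x_pivot = np.linalg.solve(A, b)        # LAPACK applies partial pivoting
```

The unpivoted $x_1$ comes out with a relative error many orders of magnitude above machine precision, while the pivoted solve is accurate to roundoff.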
This is where the instability comes from, and the result itself is unstable, so no algorithm can fix it: any algorithm would have to arrive at that same final factorization (LU is unique if $A$ is nonsingular).
By introducing the partial pivoting on the left side of $A$, you are avoiding this effect.
Effectively, these permutations are only permuting equations of your system, nothing more. So your LU of $PA$ is really the Gaussian eliminations algorithm performed on the same equations system, only these equations are in a different order.
Similarly, partial pivoting of columns would be the same as permuting the variables (thus introducing the need to be careful which variable gets which value in the end). The full pivoting would permute both the equations and the variables, for the most stable way to solve the system.
Variable-wise, all of these (when they can be performed) give you the same solution to a system of linear equations. But in finite arithmetic, some formulas produce larger errors than others. So you'd rather compute $x_1 = x_2$ than $x_1 = 10^{17}(1 + 10^{-17}x_2 - 1)$. Notice that the latter would actually give you $x_1 = 0$ if $x_2 \approx 1$, even though in exact arithmetic it is the same as $x_1 = x_2$.
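The cancellation in the second formula is easy to reproduce in Python doubles (taking $x_2 = 1$):

```python
x2 = 1.0
x1_direct = x2                                 # x1 = x2, error-free
x1_unstable = 1e17 * (1.0 + 1e-17 * x2 - 1.0)  # algebraically also x2
# In double precision 1.0 + 1e-17 rounds back to 1.0 (the spacing of
# doubles near 1.0 is about 2.2e-16), so the parenthesis evaluates to
# exactly 0 and x1_unstable is 0, not 1 -- the effect described above.
```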
That's why $A = LU$ can be a problem in itself, not just the way you computed it.
Row (column) reducing a matrix is essentially a process in which you perform a series of elementary operations which, taken together, have the same effect as multiplying your equation (augmented system) by an invertible matrix (a product of elementary row or column matrices).
Now, the distinction between a row operation and a column operation is that a row operation is the same as multiplying on the left, e.g. $E_RA=E_RB$, whereas a column operation is the same as multiplying on the right, e.g. $AE_C=BE_C$. To perform such an operation on an augmented system $[A : B]$, the matrices $A$ and $B$ must have the same number of rows (for a row operation) or columns (for a column operation). So you cannot, for example, perform a column operation on the augmented system of $Ax=B$ where $A$ has multiple columns and $B$ only one: within the context of solving the system for $x$, such an operation would amount to multiplying both $A$ and $B$ on the right by the same elementary matrix, and the multiplication would be undefined for one of them, since they do not have the same number of columns.
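The left/right distinction is easy to check numerically (Python/NumPy; the matrices are my own toy example):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
E = np.array([[0., 1.], [1., 0.]])  # elementary matrix: swap index 0 and 1

row_swapped = E @ A   # row operation: E on the LEFT swaps the rows of A
col_swapped = A @ E   # column operation: E on the RIGHT swaps the columns of A
```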
Now, just as you can row reduce the augmented system $[A : I]$ to $[I : A^{-1}]$ (if $A$ is invertible) you can also column reduce $\begin{bmatrix} A \\ \hline \\ I \end{bmatrix}$ to $\begin{bmatrix} I \\ \hline \\ A^{-1} \end{bmatrix}$. The use of augmenting with $I$ is that it allows you to retrieve the composition of elementary operations as a single invertible matrix.
What about both row and column operations on a matrix? Here the second observation I want to make, is that the whole point of augmenting and row or column reducing together, is that instead of multiplying one matrix and then the other with the same elementary matrix you do it all together in one step, but usually in the end (if you are not solving a linear system) you want to retrieve the nonsingular matrices composed of the series of elementary operations. So you want $P$ and/or $Q$ so that $PAQ$ is some meaningful equivalent matrix. Consider reducing
$\begin{bmatrix}A & I \\ I & 0 \end{bmatrix}$ to $\begin{bmatrix}B & P \\ Q & 0 \end{bmatrix}.$
($A$ needn't be square, but then the two identity submatrices are not the same size.) What this means is that $PAQ=B$, and we say that $A$ and $B$ are equivalent.
If $A$ is invertible then you can also use both row and column operations:
Reduce $\begin{bmatrix}A & I \\ I & 0 \end{bmatrix}$ to $\begin{bmatrix}I & P \\ Q & 0 \end{bmatrix}.$
What this means is $PAQ=I$, so that $A^{-1}=QP$.
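For a concrete instance (Python/NumPy; the matrix is my own toy example): swapping the columns of $A$ and then scaling the rows reduces it to $I$, giving $P$ and $Q$ with $PAQ=I$ and hence $A^{-1}=QP$.

```python
import numpy as np

A = np.array([[0., 2.], [3., 0.]])

Q = np.array([[0., 1.], [1., 0.]])  # column operation: swap the two columns
P = np.diag([0.5, 1.0 / 3.0])       # row operations: scale row 0 by 1/2, row 1 by 1/3

# P A Q = I, hence A^{-1} = Q P.
A_inv = Q @ P
```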
The one exception I should mention, where one can use both row and column operations on an augmented system $[A : I]$, is where $A$ is symmetric and you reduce it to $[D : Q^T]$ by a series of alternating column and row operations (each column operation followed by exactly the same row operation, e.g. swap columns 1 and 2 followed by swap rows 1 and 2). In the end you will have $D=Q^TAQ$. Of course, this only works because $A$ is symmetric.
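A small instance of this symmetric reduction (Python/NumPy; the matrix is my own toy example): one column operation followed by the identical row operation diagonalizes $A$, and $D = Q^TAQ$.

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])  # symmetric

# Column operation: col1 -= (1/2) * col0, i.e. right-multiply by Q.
Q = np.array([[1., -0.5], [0., 1.]])
# The SAME row operation (row1 -= (1/2) * row0) is left-multiplication by Q^T.
D = Q.T @ A @ Q   # congruence (not similarity) -- diagonal because A is symmetric
```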
Best Answer
Partial pivoting is about changing the rows of the matrix, effectively changing the order of the equations. Full pivoting also changes the order of the columns, i.e., the order of the variables.
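A sketch of complete (full) pivoting in Python/NumPy, assuming a square nonsingular input (the function name and the return convention $PAQ = LU$ are my own choices): the row permutation $P$ reorders the equations, and the column permutation $Q$ reorders the variables.

```python
import numpy as np

def lu_complete(A):
    """Gaussian elimination with complete pivoting: returns P, Q, L, U
    such that P @ A @ Q = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P, Q, L = np.eye(n), np.eye(n), np.eye(n)
    for k in range(n - 1):
        # Pivot: the largest-magnitude entry of the trailing submatrix.
        i, j = np.unravel_index(np.argmax(np.abs(A[k:, k:])), (n - k, n - k))
        i, j = i + k, j + k
        A[[k, i]] = A[[i, k]]; P[[k, i]] = P[[i, k]]              # row swap: equations
        L[[k, i], :k] = L[[i, k], :k]                             # keep multipliers aligned
        A[:, [k, j]] = A[:, [j, k]]; Q[:, [k, j]] = Q[:, [j, k]]  # column swap: variables
        for r in range(k + 1, n):                                 # eliminate below pivot
            L[r, k] = A[r, k] / A[k, k]
            A[r, k:] -= L[r, k] * A[k, k:]
    return P, Q, L, np.triu(A)
```

To solve $Ax=b$ one then solves $Ly = Pb$ and $Uz = y$, and recovers $x = Qz$: this final un-permutation is exactly the "be careful which variable gets which value" bookkeeping that column pivoting introduces.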