How do column transformations on an $n \times n$ matrix affect the final inverse matrix

Tags: inverse, linear-algebra, matrices

$\mathbb{A}$ is an $n \times n$ invertible matrix and $\mathbb{A}^{-1}$ is its inverse. $\mathbb{B}$ is a matrix obtained by applying several row transformations to $\mathbb{A}$, and $\mathbb{B}^{-1}$ is the inverse of $\mathbb{B}$.
How can we show that $\mathbb{B}^{-1}$ can be obtained from $\mathbb{A}^{-1}$ by certain column transformations, and how can we describe these transformations?

I think I understand that, in order to get $\mathbb{B}^{-1}$ from $\mathbb{A}^{-1}$, when we multiply a row of $\mathbb{A}$ by a scalar we have to divide the corresponding column of $\mathbb{A}^{-1}$ by that scalar. I also suspect that switching two rows of $\mathbb{A}$ should not really change the inverse much. But I don't really understand why, and I also do not understand what adding a multiple of one row of $\mathbb{A}$ to another row would do to $\mathbb{A}^{-1}$.
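The row-scaling claim is easy to check numerically. Here is a minimal sketch using numpy (the matrix values and the scalar are arbitrary choices for illustration):

```python
import numpy as np

# Arbitrary invertible test matrix (values chosen only for illustration)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
A_inv = np.linalg.inv(A)

# Row operation: multiply row 0 of A by the scalar c
c = 5.0
B = A.copy()
B[0, :] *= c

# Claim: dividing column 0 of A^{-1} by c gives B^{-1}
expected = A_inv.copy()
expected[:, 0] /= c
assert np.allclose(np.linalg.inv(B), expected)
```

The assertion passes, which is consistent with the claim for this operation; the answer below explains why it holds in general.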

Best Answer

Applying a sequence of $k$ row operations to $A$ is the same as computing the product $$ B = E_k \cdots E_2 E_1 A, $$ where $E_j$ is the elementary matrix corresponding to the $j$th row operation. On the other hand, we find that $$ B^{-1} = (E_k \cdots E_2 E_1 A)^{-1} = A^{-1}E_1^{-1} E_2^{-1} \cdots E_{k}^{-1}. $$ Right-multiplying by an elementary matrix performs a column operation, so this product corresponds to taking $A^{-1}$ and applying the column operation associated with $E_1^{-1}$, then the column operation associated with $E_2^{-1}$, and so on. Each $E_j^{-1}$ is again an elementary matrix of the same type: the inverse of swapping two rows is the same swap, the inverse of scaling a row by $c$ is scaling by $1/c$, and the inverse of adding $c$ times row $i$ to row $j$ is adding $-c$ times row $i$ to row $j$.
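The identity above can be verified numerically. A sketch using numpy (the particular matrix and the three elementary operations are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random matrix shifted by n*I so it is invertible with high probability
A = rng.standard_normal((n, n)) + n * np.eye(n)

# Three elementary row operations, each as an elementary matrix E_j
E1 = np.eye(n); E1[[0, 2]] = E1[[2, 0]]  # swap rows 0 and 2
E2 = np.eye(n); E2[1, 1] = 3.0           # scale row 1 by 3
E3 = np.eye(n); E3[3, 0] = -2.0          # add -2 * row 0 to row 3

# B = E3 E2 E1 A: the row operations applied to A in order
B = E3 @ E2 @ E1 @ A

# B^{-1} = A^{-1} E1^{-1} E2^{-1} E3^{-1}:
# the corresponding column operations applied to A^{-1} in the same order
A_inv = np.linalg.inv(A)
C = A_inv @ np.linalg.inv(E1) @ np.linalg.inv(E2) @ np.linalg.inv(E3)

assert np.allclose(C, np.linalg.inv(B))
```

Note that each right-multiplication by $E_j^{-1}$ acts on the columns: the swap matrix swaps columns 0 and 2, the scaling matrix divides column 1 by 3, and the third adds $2$ times column 3 to column 0.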