Is a change of basis matrix equivalent to the matrix inverse in this case?

change-of-basis, linear-algebra

I was looking at how to construct a change of basis matrix so that, given a system of linear equations, one could turn the associated matrix into a diagonal matrix, thus making the system easier to solve.

Assume an $n \times n$ matrix $A$ has $n$ linearly independent columns, which therefore form a basis of the vector space. If we then construct a change of basis matrix from those columns, the matrix of the system should become the identity matrix, correct? Each column of $A$ is now represented by itself as a basis element, so the result should be the identity matrix. We would also have to convert the $n \times 1$ vector on the right-hand side of the system to this new basis, and that would be done by multiplying it by the change of basis matrix.

It would seem the process I've described is exactly how an inverse matrix acts when solving a system of equations. In particular, the change of basis matrix would change coordinates from the standard basis to the basis consisting of the $n$ linearly independent columns of the original matrix $A$.
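To make this concrete, here is a small made-up $2 \times 2$ example of what I mean (the matrix and vector are just for illustration):

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \qquad A^{-1} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}, \qquad A^{-1}A = I.$$

Taking $y = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$ gives $A^{-1}y = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and indeed $y = 1\cdot\begin{pmatrix} 2 \\ 1 \end{pmatrix} + 1\cdot\begin{pmatrix} 1 \\ 1 \end{pmatrix}$, so the entries of $A^{-1}y$ are the coordinates of $y$ with respect to the columns of $A$.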

Thanks in advance for reviewing!

Best Answer

Indeed, multiplying by $A^{-1}$ changes basis from the standard basis to the basis consisting of columns of $A$. This useful fact is emphasized in Trefethen's popular textbook Numerical Linear Algebra.

If $x = A^{-1} y$, then $y = Ax$. This tells us that $y$ can be written as a linear combination of the columns of $A$, using the coefficients stored in $x$. In other words, $x$ is the coordinate vector of $y$ with respect to the basis consisting of columns of $A$.
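As a quick numerical check of this statement, here is a minimal sketch in NumPy with a made-up invertible matrix $A$ and vector $y$ (any choice would work):

```python
import numpy as np

# Made-up example: any invertible A and any y will do.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # the columns of A form a basis of R^2
y = np.array([3.0, 2.0])

# x = A^{-1} y, computed with a linear solve rather than forming A^{-1} explicitly.
x = np.linalg.solve(A, y)

# y is the linear combination of the columns of A with coefficients x,
# so x is the coordinate vector of y in the column basis of A.
print(x)                            # [1. 1.]
print(np.allclose(A @ x, y))        # True
```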
