[Math] How to find the change of coordinates matrix from a given matrix to the standard basis

Tags: linear-algebra, matrices, matrix

I'm not sure how to approach this problem. The examples I've come across on the internet show how to find the change of coordinates matrix from one basis to another, such as from B to C.

I came up with an answer but I'm not sure if it's correct.

I started out with the matrix whose columns are the three given vectors:

3    2    1
0    2   -2
6   -4    3

Then I found its inverse, which is the following:

1/21    5/21     1/7
2/7    -1/14    -1/7
2/7    -4/7     -1/7

Then I multiplied the inverse by the first standard basis vector:

1/21    5/21     1/7         1
2/7    -1/14    -1/7    *    0
2/7    -4/7     -1/7         0

And came up with the following answer:

1/21
2/7
2/7

Is this correct? Something tells me I'm missing something or perhaps I approached the whole thing incorrectly.
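As a quick sanity check of the arithmetic, here is a minimal NumPy sketch (NumPy is just one convenient choice; any CAS would do):

```python
import numpy as np

# Matrix whose columns are the three given vectors.
B = np.array([[3.0,  2.0,  1.0],
              [0.0,  2.0, -2.0],
              [6.0, -4.0,  3.0]])

B_inv = np.linalg.inv(B)

# The inverse computed by hand above.
B_inv_by_hand = np.array([[1/21,  5/21,  1/7],
                          [ 2/7, -1/14, -1/7],
                          [ 2/7,  -4/7, -1/7]])

print(np.allclose(B_inv, B_inv_by_hand))   # True: the hand-computed inverse is right.

# Multiplying B^{-1} by e1 just picks out its first column.
e1 = np.array([1.0, 0.0, 0.0])
print(B_inv @ e1)                          # [0.04761905 0.28571429 0.28571429]
```

So the inverse itself is correct, and the product with $e_1$ is simply the first column of $B^{-1}$.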

Best Answer

First, make sure you understand what it means to write a vector $v$ in the basis $B$. In the standard basis, $S$, the vector $(1,2,3)_S$ is the linear combination $$1\cdot \left[\begin{array}{c}1\\0\\0\end{array}\right] + 2\cdot \left[\begin{array}{c}0\\1\\0\end{array}\right] + 3\cdot \left[\begin{array}{c}0\\0\\1\end{array}\right] $$ which is the same as the matrix multiplication problem: $$ \left[ \begin{array}{ccc} 1&0&0\\0&1&0\\0&0&1 \end{array}\right] \left[\begin{array}{c}1\\2\\3\end{array}\right].$$

In the basis $B$, the vector $(1,2,3)_B$ is the linear combination $$1\cdot \left[\begin{array}{c}3\\0\\6\end{array}\right] + 2\cdot \left[\begin{array}{c}2\\2\\-4\end{array}\right] + 3\cdot \left[\begin{array}{c}1\\-2\\3\end{array}\right] = \left[ \begin{array}{ccc} 3&2&1\\0&2&-2\\6&-4&3 \end{array}\right] \left[\begin{array}{c}1\\2\\3\end{array}\right],$$ where the columns of the matrix are the basis vectors.
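To see this concretely, here is a small NumPy sketch (the choice of NumPy is incidental) checking that the linear combination and the matrix product agree:

```python
import numpy as np

b1 = np.array([3.0,  0.0,  6.0])
b2 = np.array([2.0,  2.0, -4.0])
b3 = np.array([1.0, -2.0,  3.0])

# Matrix whose columns are the basis vectors.
B = np.column_stack([b1, b2, b3])

combo   = 1*b1 + 2*b2 + 3*b3            # the linear combination written out above
product = B @ np.array([1.0, 2.0, 3.0]) # the matrix multiplication

print(combo)                            # [10. -2.  7.]
print(np.allclose(combo, product))      # True
```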

In general, the matrix $T$ whose columns are the basis vectors can be used to change a vector $v_T$ written in that basis into the standard basis $S$ via $T\cdot v_T = v_S$.

(This agrees with the fact that $I\cdot v_S = v_S$.)

In this case, a vector represented in both $S$ and $B$ would satisfy $B\cdot v_B = I \cdot v_S$, so $v_B = B^{-1} \cdot v_S$.

This shows that $B$ and $B^{-1}$ are the matrices that convert coordinates back and forth between $B$ and the standard basis.
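Both directions can be seen in a short sketch, again in NumPy and reusing the example vector $(1,2,3)_B$ from above:

```python
import numpy as np

B = np.array([[3.0,  2.0,  1.0],
              [0.0,  2.0, -2.0],
              [6.0, -4.0,  3.0]])    # columns are the basis vectors

v_B = np.array([1.0, 2.0, 3.0])      # coordinates relative to B

v_S  = B @ v_B                       # B-coordinates -> standard coordinates
back = np.linalg.inv(B) @ v_S        # standard coordinates -> B-coordinates

print(v_S)    # [10. -2.  7.]
print(back)   # [1. 2. 3.]  (recovers v_B, as expected)
```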