Example on basis change

Tags: change-of-basis, linear-algebra, linear-transformations, vector-spaces

I am reading the book Mathematics for Machine Learning by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong.

I have a problem understanding an example in this book:

Example 2.23 (Basis change) (page 53-54)

Consider a transformation matrix
$\bf{A}=\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix}$
with respect to the canonical basis in $\mathbb{R}^2$. If we define a new basis $B=(\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}1\\-1\end{bmatrix})$, we obtain a diagonal transformation matrix $\tilde{\mathbf{A}}=\begin{bmatrix}3 & 0\\0 & 1\end{bmatrix}$ with respect to $B$, which is easier to work with than $\mathbf{A}$.

May I ask why we could obtain $\tilde{\mathbf{A}}$ as above?
Thank you very much.

Best Answer

Let $v_1=\begin{bmatrix}1\\1\end{bmatrix}$ and $v_2=\begin{bmatrix}1\\-1\end{bmatrix}$. We have \begin{align} \mathbf Av_1 &= \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}3\\3\end{bmatrix} = \color{red}{3}v_1+\color{red}{0}v_2, \\ \mathbf Av_2 &= \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix}\begin{bmatrix}1\\-1\end{bmatrix} = \begin{bmatrix}1\\-1\end{bmatrix} = \color{blue}{0}v_1+\color{blue}{1}v_2. \end{align} Hence, with respect to the basis $B=(v_1,v_2)$ in both domain and codomain of the map $x\mapsto\mathbf Ax$, we obtain the matrix $$ \mathbf{\tilde A} = \begin{bmatrix} \color{red}3 & \color{blue}0 \\ \color{red}0 & \color{blue}1 \end{bmatrix}. $$
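Equivalently, you can verify this numerically with the change-of-basis formula $\tilde{\mathbf A} = \mathbf P^{-1}\mathbf A\mathbf P$, where $\mathbf P$ is the matrix whose columns are the new basis vectors $v_1, v_2$. A minimal NumPy sketch:

```python
import numpy as np

# Transformation matrix with respect to the canonical basis
A = np.array([[2, 1],
              [1, 2]])

# Change-of-basis matrix P: columns are the new basis vectors v1, v2
P = np.array([[1,  1],
              [1, -1]])

# Matrix of the same map with respect to B: A_tilde = P^{-1} A P
A_tilde = np.linalg.inv(P) @ A @ P
print(A_tilde)  # [[3. 0.]
                #  [0. 1.]]
```

Each column of $\tilde{\mathbf A}$ holds the $B$-coordinates of $\mathbf A v_i$, which is exactly what the computation above reads off.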
