Finding matrix representation of linear transformation – intuition

Tags: intuition, linear-algebra, linear-transformations

Given a linear transformation $t : V \longrightarrow W$, we want to find a matrix $\mathbf{A}$ that represents this linear transformation.

I have already seen examples of how this is done, but I'd like to improve my intuitive understanding of linear algebra instead of just following a predefined set of steps.

  1. So, in general, I understood that a linear transformation is completely determined by where it sends the basis vectors of its domain.

  2. I also understood that there can be four cases when we want to find the matrix representation of a given linear transformation $t : V \longrightarrow W$.

| Case | Basis of domain $V$ | Basis of codomain $W$ |
| --- | --- | --- |
| 1 | standard | standard |
| 2 | non-standard | standard |
| 3 | standard | non-standard |
| 4 | non-standard | non-standard |
  • In every case, to find the matrix representation of $t$, we apply $t$ to the basis vectors of the domain $V$.
  • For cases 1 and 2, we are then done: the images of the basis vectors of $V$ under $t$ become the columns of $\mathbf{A}$.
  • For cases 3 and 4, there is more work to do: we must first express the images under $t$ of the basis vectors of $V$ in terms of the basis of the codomain $W$, and only then use these coordinate vectors as the columns of $\mathbf{A}$ (see the sketch after this list).
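
To make the "translation" step in cases 3 and 4 concrete, here is a minimal numerical sketch assuming NumPy; the transformation `t` and both bases below are illustrative (a hypothetical case-4 setup), not from the book. Expressing $t(v_j)$ in the codomain basis amounts to solving a linear system whose coefficient matrix has the codomain basis vectors as its columns:

```python
import numpy as np

# Hypothetical case-4 setup: t : R^2 -> R^2, t(x, y) = (x + y, x - y),
# with non-standard bases on both the domain and the codomain.
def t(v):
    x, y = v
    return np.array([x + y, x - y])

B1 = [np.array([1.0, 1.0]), np.array([1.0, 0.0])]  # basis of the domain V
B2 = [np.array([2.0, 0.0]), np.array([0.0, 3.0])]  # basis of the codomain W

# Matrix whose columns are the codomain basis vectors.
C = np.column_stack(B2)

# Column j of A holds the coordinates of t(v_j) in the basis B2,
# i.e. the solution c of C @ c = t(v_j): this is the "translation" step.
A = np.column_stack([np.linalg.solve(C, t(v)) for v in B1])
print(A)  # [[1.  0.5], [0.  0.33333333]]
```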

I get that for case 1 we don't need to do any translation, since the domain and the codomain both use their standard bases. What's not intuitive to me: why don't we need to do any translation in case 2?

Example for case 2 from a book I have:

  • Let $t : \mathbb{R}^3 \longrightarrow \mathbb{R}^2$ with the rule $(x, y, z) \mapsto (x, y)$. Let the domain have the non-standard basis $E=\{(1, 1, 1), (1, 1, 0), (1, 0, 0)\}$ and the codomain have the standard basis $F$ of $\mathbb{R}^2$.

  • Then: $t(1, 1, 1) = \color{red}{(1, 1)}$ and $t(1, 1, 0) = \color{blue}{(1, 1)}$ and $t(1, 0, 0) = \color{green}{(1, 0)}$. Hence the matrix of $t$ with respect to the bases $E$ and $F$ is given by:

$$
\mathbf{A} = \begin{bmatrix}
\color{red}{1} & \color{blue}{1} & \color{green}{1} \\
\color{red}{1} & \color{blue}{1} & \color{green}{0}
\end{bmatrix}
$$
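
A quick numerical check of the book's example (a sketch assuming NumPy; the test vector `v` is my own choice): since the codomain uses the standard basis, the standard-basis coordinates of each image are just its entries, so the columns of $\mathbf{A}$ can be read off directly.

```python
import numpy as np

# The book's case-2 example: t : R^3 -> R^2, (x, y, z) |-> (x, y).
def t(v):
    return v[:2]

E = [np.array([1.0, 1.0, 1.0]),
     np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 0.0])]

# The codomain basis is standard, so t(e) already *is* its own
# coordinate vector -- no translation step required.
A = np.column_stack([t(v) for v in E])
print(A)  # [[1. 1. 1.], [1. 1. 0.]]

# Sanity check: take any v, compute its E-coordinates, and verify
# that A maps those coordinates to t(v) in standard coordinates.
v = np.array([3.0, 5.0, 7.0])
coords_E = np.linalg.solve(np.column_stack(E), v)
print(A @ coords_E, t(v))  # both print [3. 5.]
```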

I find it curious that we don't need to translate the images under $t$ of the basis vectors of the domain $V$ into the basis of the codomain $W$ – because to my understanding, the images of the basis vectors of $V$ under $t$ would still be expressed in the basis of $V$. Or is my understanding here incorrect?

Best Answer

All four cases are really just one case. Given a vector space $U$ with basis $B = \{u_1, \dots, u_n\}$, define the coordinate isomorphism $J_B \colon \mathbb{F}^n \to U$ by $$J_Be_j = u_j \text{ for each } j \in \{1, \dots, n\},$$ where $e_1, \dots, e_n$ is the standard basis of $\mathbb{F}^n$.

Suppose $T \colon V \to W$ is linear, $B_1 = \{v_1, \dots, v_n\}$ is a basis of $V$, and $B_2 = \{w_1, \dots, w_m\}$ is a basis of $W$. Then the matrix representation of $T$ with respect to these bases is $$M_{B_1}^{B_2}(T) = J_{B_2}^{-1}TJ_{B_1}.$$ So $M_{B_1}^{B_2}(T)$ takes in $B_1$-coordinates of a vector $v \in V$ and returns $B_2$-coordinates of $Tv$. Thus the $j$-th column of $M_{B_1}^{B_2}(T)$ is the $B_2$-coordinates of $Tv_j$. In particular, when $B_2$ is the standard basis of $\mathbb{R}^m$, the map $J_{B_2}$ is the identity, so the $B_2$-coordinates of $Tv_j$ are just its entries; that is exactly why no translation is needed in your cases 1 and 2.
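
When $V = \mathbb{F}^n$ and $W = \mathbb{F}^m$, each $J_B$ is simply the matrix whose columns are the basis vectors, so the formula above can be evaluated directly as a product of matrices. A minimal sketch assuming NumPy, applied to the asker's example (note that $J_{B_2}$ is the identity precisely because the codomain basis is standard):

```python
import numpy as np

# Standard matrix of T : R^3 -> R^2, (x, y, z) |-> (x, y).
T_std = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

# J_B1: columns are the domain basis vectors (the basis E above).
J_B1 = np.array([[1.0, 1.0, 1.0],
                 [1.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0]])

# J_B2: columns are the codomain basis vectors; the standard basis
# gives the identity, which is why cases 1 and 2 need no translation.
J_B2 = np.eye(2)

# M = J_B2^{-1} T J_B1, the formula from the answer.
M = np.linalg.inv(J_B2) @ T_std @ J_B1
print(M)  # [[1. 1. 1.], [1. 1. 0.]] -- matches the book's A
```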
