Creation of a matrix representation of a linear transformation

linear-algebra, linear-transformations, matrices

Why is it that when finding the matrix representation of a linear transformation $T: V \rightarrow W$, where $V$ has basis $\{v_1, v_2, \dots, v_n\}$ and $W$ has basis $\{w_1, w_2, \dots, w_m\}$, the resulting matrix sometimes has columns $T(v_1), T(v_2), \dots, T(v_n)$, and other times has columns holding the coefficients needed to express each $T(v_k)$ as a linear combination of the $w_j$?

Best Answer

Let $\textsf{T}: \textsf{V}\to \textsf{W}$ be a linear transformation and let $\beta =\{v_1,v_2,\dots,v_n\}$ and $\gamma =\{w_1,w_2,\dots,w_m\}$ be bases for $\textsf{V}$ and $\textsf{W}$, respectively.

To compute the matrix associated with $\textsf{T}$ with respect to $\beta$ and $\gamma$, we always, without exception, compute the vectors $\textsf{T}(v_j)$ and express each one as a linear combination of the elements of $\gamma$, like this: $$\textsf{T}(v_j)=A_{1j}w_1+A_{2j}w_2+\cdots+A_{mj}w_m$$ Then we put the coefficients $$\begin{pmatrix} A_{1j} \\ A_{2j} \\ \vdots \\ A_{mj} \end{pmatrix}$$ in the $j$-th column of the matrix $A:= [\textsf{T}]_{\beta}^{\gamma}$ (for $j=1,2,\dots,n$).
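In case it helps, here is a small NumPy sketch of that recipe. It assumes $\textsf{T}$ is handed to us as a matrix in the standard bases and that the basis vectors of $\beta$ and $\gamma$ are given in standard coordinates; the function name `matrix_of_T` is just for illustration. Finding the coefficients $A_{1j},\dots,A_{mj}$ amounts to solving a linear system whose coefficient matrix has the $w_i$ as columns:

```python
import numpy as np

def matrix_of_T(T_std, beta, gamma):
    """Return [T]_beta^gamma.

    T_std : matrix of T in the standard bases (an assumption for concreteness;
            any way of evaluating T on the beta vectors would do).
    beta  : list of domain basis vectors, in standard coordinates.
    gamma : list of codomain basis vectors, in standard coordinates.

    Column j holds the coefficients A_{1j}, ..., A_{mj} such that
    T(v_j) = A_{1j} w_1 + ... + A_{mj} w_m.
    """
    Gamma = np.column_stack(gamma)                        # columns are the w_i
    images = np.column_stack([T_std @ v for v in beta])   # columns are T(v_j)
    # Solve Gamma @ A = images column by column: each column of A is the
    # coordinate vector of T(v_j) relative to gamma.
    return np.linalg.solve(Gamma, images)
```

When $\gamma$ is the standard basis, `Gamma` is the identity and the columns of the result are literally the vectors $\textsf{T}(v_j)$, which is exactly the coincidence the question is about.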

Now, to address your specific doubt, consider $\textsf{T}: \mathbb{R}^2 \to \mathbb{R}^2$ given by $\textsf{T}(a,b)=(-b,a)$ (the particular map doesn't matter) and the bases $\beta=\{(2,1),(3,5)\}$ and $\gamma=\{e_1,e_2\}$ (the standard basis).

As I said above, we must first compute $\textsf{T}(2,1)$ and express the result as a linear combination of the elements of the second basis. In this case $$\textsf{T}(2,1)=(-1,2)=(-1)e_1+2e_2$$ As you can see, the coefficients that go in the first column of the matrix look exactly like the vector $(-1,2)$ (maybe this is what you meant). To finish, note also that $$\textsf{T}(3,5)=(-5,3)=(-5)e_1+3e_2$$ So $$[\textsf{T}]_{\beta}^{\gamma} =\begin{pmatrix} -1 & -5 \\ 2 & 3 \end{pmatrix}$$
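A quick numeric sanity check of this example, assuming NumPy: the defining property of $[\textsf{T}]_{\beta}^{\gamma}$ is that it sends $\beta$-coordinates to $\gamma$-coordinates, i.e. $[\textsf{T}]_{\beta}^{\gamma}\,[x]_{\beta} = [\textsf{T}(x)]_{\gamma}$ for every $x$. Since $\gamma$ here is the standard basis, $\gamma$-coordinates are just the vector itself, and we can verify this on an arbitrary point:

```python
import numpy as np

# T(a, b) = (-b, a), written as a matrix in the standard bases
T_std = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
A = np.array([[-1.0, -5.0],    # [T]_beta^gamma computed above
              [ 2.0,  3.0]])
B = np.column_stack([(2.0, 1.0), (3.0, 5.0)])  # columns: the beta vectors

x = np.array([7.0, -4.0])          # an arbitrary test vector
x_beta = np.linalg.solve(B, x)     # coordinates of x relative to beta
# A applied to beta-coordinates should give T(x) in gamma-coordinates,
# which (gamma being standard) is just T(x) itself.
assert np.allclose(A @ x_beta, T_std @ x)
```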

(Apologies for my English; I hope this has clarified your doubt a bit.)
