Why is it that when finding the matrix representation of a linear transformation $T: V \rightarrow W$, where $V$ has basis $\{v_1, v_2, \dots, v_n\}$ and $W$ has basis $\{w_1, w_2, \dots, w_m\}$, the resulting matrix sometimes has columns $T(v_1), T(v_2), \dots, T(v_n)$, and other times its columns are the coefficients needed to express each $T(v_k)$ as a linear combination of the $w_j$?
Creation of a matrix representation of a linear transformation
linear-algebra, linear-transformations, matrices
Related Solutions
I'll suppose the rank of a linear transformation $L$ is defined as the dimension of its image $L(V)=\{\, L(v)\mid\,v\in V\}$ and that the rank of an $m\times n$ matrix$~A$ is defined as the dimension of the subspace of$~K^m$ (with $K$ the base field, you may take $K=\Bbb R$) spanned by its columns.
First note that if $\{ v_1,\ldots, v_n\}$ is a basis of$~V$, then the vectors $L(v_1),\ldots, L(v_n)\in W$ span $L(V)$, since $L$ respects linear combinations. Also, the matrix $A$ above defines a linear transformation $L_A:K^n\to K^m$ by $L_A(v)=A\cdot v$. Since column$~j$ of $A$ is just $A\cdot e_j$ where $\{e_1,\ldots,e_n\}$ is the standard basis of $K^n$, we have that the subspace spanned by the columns of$~A$ is the subspace of $K^m$ spanned by $L_A(e_j)$ for $j=1,\ldots,n$, which by the above is the image of$~L_A$. So according to the definitions, the rank of $A$ is the rank of$~L_A$.
Finally the bases of the question define isomorphisms $\phi_v:V\to K^n$ (finding coordinates of any vector in $V$ in the basis $\{ v_1,\ldots, v_n\}$) and $\psi_w:K^m\to W$ (using the $m$-tuple of scalars as coordinates in the basis $\{ w_1,\ldots, w_m\}$ to get a vector of$~W$), and the matrix $A$ of$~L$ with respect to these bases is by definition such that $L=\psi_w\circ L_A\circ\phi_v$ (given $v\in V$ find its coordinates by$~\phi_v$, multiply those by$~A$, and interpret the result as a vector of$~W$ via$~\psi_w$). The isomorphism $\psi_w$ then maps the image of $L_A$ to that of $L$, so the two linear transformations have the same rank, which is also the rank of $A$.
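The chain of identifications above (columns of $A$ are the vectors $L_A(e_j)$, which span the image of $L_A$) can be checked numerically. A minimal sketch in NumPy, using a hypothetical $3\times 4$ matrix chosen so that its columns are linearly dependent:

```python
import numpy as np

# A hypothetical 3x4 matrix A over the reals, chosen so that its
# columns are linearly dependent (rank 2 rather than 3).
A = np.array([[1.0, 2.0, 3.0, 0.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 4.0, 1.0]])  # col3 = col1 + col2

# The image of L_A is spanned by L_A(e_j) = A @ e_j, i.e. the columns of A.
images = [A @ e for e in np.eye(4)]

# Dimension of the span of those images = rank of the matrix they form.
rank_of_image = np.linalg.matrix_rank(np.column_stack(images))

# This agrees with the column rank of A itself.
assert rank_of_image == np.linalg.matrix_rank(A) == 2
```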
Let $T: V \rightarrow V$ be a linear transformation, let $W \subseteq V$ be a $T$-invariant subspace, and let $\beta = \{ v_1, \dots, v_n \}$ be a basis constructed so as to place $\{ v_1, \dots , v_k \}$ in $W$. Define $$ [T]_{\beta} = [[T(v_1)]_{\beta}|\cdots | [T(v_k)]_{\beta}|[T(v_{k+1})]_{\beta}|\cdots | [T(v_n)]_{\beta}] $$ Here $[c_1v_1+ \cdots +c_nv_n]_{\beta} = [c_1, \cdots , c_n]^T$; this is the $\beta$-coordinate map. By the $T$-invariance of $W$ there exist scalars $a_{ij}$ for $1 \leq i,j \leq k$ such that $$ T(v_j) = \sum_{i=1}^k a_{ij}v_i $$ Now, just to emphasize what's happening, I'll focus on $j=1$: $$ T(v_1) = \sum_{i=1}^k a_{i1}v_i = a_{11}v_1+ \cdots +a_{k1}v_k+0v_{k+1}+ \cdots +0v_n $$ Of course, we can easily read off the coordinates of $T(v_1)$ from the expression above: $$ [T(v_1)]_{\beta} = [a_{11}, \dots, a_{k1}, 0, \dots , 0]^T$$ Likewise, for $j=2, \dots, k$ we find $k$ possibly nonzero coefficients in the image paired with $n-k$ necessary $0$'s. In contrast, we get to say nothing about $T(v_{k+1})$ through $T(v_n)$. So, what does that mean?
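It means $[T]_{\beta}$ is block upper triangular: the lower-left $(n-k)\times k$ block is zero. A small NumPy sketch, using a hypothetical $T$ on $\mathbb{R}^3$ for which the standard basis already satisfies the setup with $W = \operatorname{span}(e_1, e_2)$, so $k = 2$:

```python
import numpy as np

# A hypothetical T: R^3 -> R^3 whose matrix in the standard basis has
# W = span(e1, e2) as a T-invariant subspace (k = 2): T maps W into W,
# so the first k columns have zeros in rows k+1, ..., n.
T = np.array([[1.0, 2.0, 5.0],
              [3.0, 4.0, 6.0],
              [0.0, 0.0, 7.0]])

k = 2
# T-invariance of W: T(v_j) for j <= k has no component along v_{k+1}, ..., v_n.
assert np.allclose(T[k:, :k], 0)

# Verify invariance directly: T applied to a vector of W stays in W.
w = np.array([2.0, -1.0, 0.0])        # an element of W (last coordinate 0)
assert np.allclose((T @ w)[k:], 0)
```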
Best Answer
Let $\textsf{T}: \textsf{V}\to \textsf{W}$ be a linear transformation and let $\beta =\{v_1,v_2,\dots,v_n\}$ and $\gamma =\{w_1,w_2,\dots,w_m\}$ be bases for $\textsf{V}$ and $\textsf{W}$, respectively.
To compute the matrix associated with $\textsf{T}$ with respect to $\beta$ and $\gamma$, we always, that is to say always, take the vector $\textsf{T}(v_j)$ and express it as a linear combination of the elements that make up $\gamma$, like this: $$\textsf{T}(v_j)=A_{1j}w_1+A_{2j}w_2+\cdots+A_{mj}w_m$$ Then we put the coefficients $$\begin{pmatrix} A_{1j} \\ A_{2j} \\ \vdots \\ A_{mj} \end{pmatrix}$$ in the $j$-th column of the matrix $A:= [\textsf{T}]_{\beta}^{\gamma}$ (for $j=1,2,\dots,n$).
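The recipe above translates directly into code: for each $j$, solve for the $\gamma$-coordinates of $\textsf{T}(v_j)$ and stack them as columns. A sketch in NumPy (the function name `matrix_of_T` and the representation of bases as lists of vectors are my own choices, not standard API):

```python
import numpy as np

def matrix_of_T(T, beta, gamma):
    """Build [T]_beta^gamma: column j holds the gamma-coordinates of T(v_j).

    T     -- a function R^n -> R^m (the linear map)
    beta  -- list of n basis vectors of the domain
    gamma -- list of m basis vectors of the codomain
    """
    G = np.column_stack(gamma)                 # columns are the gamma basis vectors
    # Solving G x = T(v_j) yields the coefficients A_{1j}, ..., A_{mj}.
    return np.column_stack([np.linalg.solve(G, T(v)) for v in beta])

# Quick check with the identity map and matching bases: the result
# must be the identity matrix, since each v_j has coordinates e_j.
b = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
assert np.allclose(matrix_of_T(lambda v: v, b, b), np.eye(2))
```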
Now, for your specific doubt, consider $\textsf{T}: \mathbb{R}^2 \to \mathbb{R}^2$ given by $\textsf{T}(a,b)=(-b,a)$ (the particular map doesn't matter) and the bases $\beta=\{(2,1),(3,5)\}$ and $\gamma=\{e_1,e_2\}$ (the standard basis).
As I said above, we first compute $\textsf{T}(2,1)$ and express the result as a linear combination of the elements of the second basis. In this case $$\textsf{T}(2,1)=(-1,2)=(-1)e_1+2e_2$$ As you can see, the coefficients that go in the first column of the matrix look exactly like the vector $(-1,2)$ (maybe this is what you meant). To finish, note also that $$\textsf{T}(3,5)=(-5,3)=(-5)e_1+3e_2$$ So $$[\textsf{T}]_{\beta}^{\gamma} =\begin{pmatrix} -1 & -5 \\ 2 & 3 \end{pmatrix}$$
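The worked example can be verified mechanically. A minimal NumPy sketch (since $\gamma$ is the standard basis, the $\gamma$-coordinates of $\textsf{T}(v_j)$ are just the vectors $\textsf{T}(v_j)$ themselves):

```python
import numpy as np

# The example above: T(a, b) = (-b, a), beta = {(2,1), (3,5)},
# gamma = the standard basis of R^2.
def T(v):
    a, b = v
    return np.array([-b, a])

beta = [np.array([2.0, 1.0]), np.array([3.0, 5.0])]
G = np.eye(2)                              # gamma = {e1, e2} as columns

# Column j = gamma-coordinates of T(v_j); with gamma standard, solving
# G x = T(v_j) just returns T(v_j) itself.
A = np.column_stack([np.linalg.solve(G, T(v)) for v in beta])
assert np.allclose(A, [[-1.0, -5.0], [2.0, 3.0]])
```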
(Apologies for my English; I hope this has clarified your doubt a bit.)