[Math] What does this theorem in linear algebra actually mean

linear-algebra, linear-transformations

I've just begun studying linear transformations, and I'm still trying to grasp the concepts fully.

One theorem in my textbook is as follows:

Let $V$ and $W$ be vector spaces over the field $F$, and suppose that $(v_1, v_2, \ldots, v_n)$ is a basis for $V$. For $w_1, w_2, \ldots, w_n$ in $W$, there exists exactly one linear transformation $T: V \rightarrow W$ such that $T(v_i) = w_i$ for $i=1,2,\ldots,n$.

The author doesn't explain it, but gives the proof right away (which I understand). But I'm trying to figure out what this theorem actually states and why it is so important. In words, it seems to mean: if I have a basis for my domain and a basis for my codomain, then there exists just one linear transformation that links both of them.

So let's say I have a linear map $T: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ with $T(1,0) = (1,4)$ and $T(1,1)=(2,5)$. Because $(1,0)$ and $(1,1)$ form a basis for my domain, does the theorem imply that $(1,4)$ and $(2,5)$ automatically form a basis for my codomain?

Best Answer

The theorem says that any map from the finite set $\{v_1,\ldots,v_n\}$ to a vector space $W$ can be uniquely extended to a linear map $V\to W$; this is true if (and only if) $[v_1,\ldots,v_n]$ forms a basis of $V$. Its importance is that it allows, at least in the case where $V,W$ are finite dimensional, any linear map to be represented by finite information, namely by a matrix, and that every matrix represents some linear map in this way. In order to get there, we must also choose a basis of $W$; then by expressing each of the images $f(v_1),\ldots,f(v_n)$ in that basis, we find the columns of the matrix representing $f$ (with respect to $[v_1,\ldots,v_n]$ and the chosen basis of $W$).

Note that this information only explicitly describes those $n$ images; the actual linear map is implicitly defined as the unique linear extension to all of $V$. The existence part of the theorem ensures that we never need to worry whether there is actually a linear transformation that corresponds to a freely chosen matrix: one can always map $v_j$ to the vector represented by column $j$, for all $j$ at the same time.
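
To make that recipe concrete, here is a minimal sketch in Python (using numpy, and borrowing the example from the question): the basis $v_1=(1,0)$, $v_2=(1,1)$ of $V=\mathbb{R}^2$, the prescribed images $w_1=(1,4)$, $w_2=(2,5)$, and the standard basis of $W=\mathbb{R}^2$. The columns of the matrix are just the coordinates of the $w_j$ in that basis, and evaluating the unique linear extension at any $x$ amounts to first writing $x$ in the basis $(v_1,v_2)$. The names below are illustrative, not from the original post.

```python
import numpy as np

# Basis of V = R^2 (as columns) and the prescribed images in W = R^2.
V_basis = np.array([[1.0, 1.0],
                    [0.0, 1.0]])          # columns are v1 = (1,0), v2 = (1,1)
images  = np.array([[1.0, 2.0],
                    [4.0, 5.0]])          # columns are w1 = (1,4), w2 = (2,5)

# Matrix of T w.r.t. (v1, v2) and the standard basis of W:
# its j-th column is w_j expressed in the standard basis.
A = images

def T(x):
    """Unique linear extension: write x in the basis (v1, v2), then map."""
    coords = np.linalg.solve(V_basis, x)   # coordinates of x w.r.t. (v1, v2)
    return A @ coords

# The extension reproduces the prescribed values ...
print(T(np.array([1.0, 0.0])))   # -> [1. 4.]
print(T(np.array([1.0, 1.0])))   # -> [2. 5.]
# ... and is thereby determined on every other vector, e.g. on e2 = (0,1):
print(T(np.array([0.0, 1.0])))   # -> [1. 1.]  (= T(v2) - T(v1), by linearity)
```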

It is only thanks to this theorem that we can work with matrices as if we were working with the linear transformations they encode; as long as we fix our bases of $V$ and $W$, we have a bijection between linear transformations $V\to W$ on the one hand and $m\times n$ matrices (where $m=\dim W$ and $n=\dim V$) on the other. In fact this bijection is itself linear, so it is an isomorphism of the $F$-vector spaces $\mathcal L(V,W)$ and $\operatorname{Mat}_{m,n}(F)$.
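
As a quick illustrative check of that last claim (a sketch, with both bases taken to be the standard basis of $\mathbb{R}^2$ so that maps and matrices can be compared directly): if $A$ and $B$ are the matrices of maps $S$ and $T$, then $aA+bB$ is the matrix of $aS+bT$, so the map $\leftrightarrow$ matrix correspondence respects the vector space operations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear maps R^2 -> R^2, given by their matrices w.r.t. the standard bases.
A, B = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
a, b = 3.0, -2.0
x = rng.normal(size=2)

# (aS + bT)(x) computed pointwise ...
lhs = a * (A @ x) + b * (B @ x)
# ... equals the map whose matrix is aA + bB applied to x.
rhs = (a * A + b * B) @ x
print(np.allclose(lhs, rhs))   # True: the correspondence T <-> matrix is linear
```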
