[Math] Relation between linear maps and matrices


I've been reading Axler's "Linear Algebra Done Right" and have learned more about linear operators/maps, but I'd like to make sure that I understand how to properly relate this material to matrices.

First, any $m \times n$ matrix $A = (a_{ij})$ with entries in a field $F$ uniquely determines a linear transformation $T: F^n\to F^m$ by $(x_1,\dots,x_n)\mapsto \left(\sum_{j=1}^{n}a_{1j}x_j,\dots,\sum_{j=1}^{n}a_{mj}x_j\right)$. Conversely, if $T:V\to W$ is a linear map between finite-dimensional vector spaces $V$ and $W$ over a field $F$, and we fix a basis $B_1$ of $V$ and a basis $B_2$ of $W$, with $\dim(V)=n$ and $\dim(W)=m$, then $M$ (the function that maps each $T$ to its matrix $M(T)$ with respect to $B_1$ and $B_2$ by the procedure outlined above) is an isomorphism between $L(V,W)$ (the vector space of linear maps from $V$ to $W$) and $\mathrm{Mat}(m,n,F)$ (the vector space of $m \times n$ matrices with entries in $F$).
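The first correspondence can be made concrete in code. Below is a minimal sketch (plain Python; the helper name `matrix_to_map` is hypothetical, not from Axler) of how an $m \times n$ matrix determines the map $x \mapsto \left(\sum_j a_{1j}x_j,\dots,\sum_j a_{mj}x_j\right)$:

```python
def matrix_to_map(A):
    """Return the linear map T: F^n -> F^m determined by the matrix A,
    where A is given as a list of m rows of length n."""
    m, n = len(A), len(A[0])

    def T(x):
        assert len(x) == n
        # i-th output coordinate is sum_j a_ij * x_j
        return [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

    return T

A = [[1, 2, 0],
     [0, 1, 3]]          # a 2 x 3 matrix
T = matrix_to_map(A)
print(T([1, 1, 1]))      # -> [3, 4], i.e. A applied to (1, 1, 1)
```

The converse direction (linear map to matrix) amounts to recording the coordinates of $T(v)$ for each basis vector $v$ in $B_1$, one column per basis vector.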

QUESTIONS

1.) If I'm given a matrix with entries in $F$, how exactly would I go about determining information about it from linear maps? For example, suppose that I'm given a real $n \times n$ matrix $A$ where $n$ is odd. I know that if $T$ is an operator on an odd-dimensional real vector space $V$, then $T$ has an eigenvalue. Since $T$ can be any operator on any odd-dimensional real vector space, can I just pick $V=\mathbb{R}^n$? Then I could say that $A$ represents a unique linear transformation $T:\mathbb{R}^n\to \mathbb{R}^n$ by the assignment above, and since $T$ has an eigenvalue, $A$ must also have (the same) eigenvalue (since $T$ and $A$ are the same transformation on $\mathbb{R}^n$)? Furthermore, if I fix the standard basis for $\mathbb{R}^n$, then no other $n \times n$ real matrix represents the particular operator that $A$ does, and every operator on $\mathbb{R}^n$ is represented by some matrix in $\mathrm{Mat}(n,n,\mathbb{R})$. Is this all correct? Would similar arguments apply to invertibility, similarity, etc.?
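The eigenvalue claim being discussed (every operator on an odd-dimensional real vector space has an eigenvalue, because an odd-degree characteristic polynomial has a real root) can be spot-checked numerically. A small sketch, assuming $F=\mathbb{R}$ and $n=3$; the particular random matrix is of course just an illustration, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # an arbitrary real 3 x 3 matrix
eigvals = np.linalg.eigvals(A)       # may contain complex entries

# Complex eigenvalues of a real matrix come in conjugate pairs,
# so with 3 eigenvalues in total, at least one must be real.
real_ones = eigvals[np.isclose(eigvals.imag, 0)]
print(real_ones)                     # non-empty for every real 3 x 3 matrix
```

Rerunning this with any seed (or any hand-chosen real $3\times 3$ matrix) always yields at least one real eigenvalue, in line with the theorem.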

2.) Since an $m \times n$ matrix $A$ with entries in $F$ represents a unique linear map $T: F^n\to F^m$, and since finite-dimensional vector spaces of the same dimension are isomorphic, can't I also interpret $A$ as representing a unique linear map $T: V\to W$, where $V$ and $W$ are any vector spaces over $F$ such that $\dim(V)=n$ and $\dim(W)=m$, once I fix bases for $V$ and $W$?

Best Answer

You are correct on several counts, but there seems to be a bit of confusion as well. Let us first address the example in your first question.

Suppose we have a linear operator $T:V\rightarrow V$ where $\dim V = n$ for odd $n$. Let us fix a basis $\mathcal{B}$ for $V$ and let $A = [T]_\mathcal{B}$ be the matrix of the mapping with respect to $\mathcal{B}$. As you've said, the map $[\ ]_\mathcal{B}: L(V)\rightarrow M_n(\mathbb{R})$ is a vector space isomorphism between the space of operators on $V$ and the space of $n\times n$ matrices.

First of all, note that you are not "free to choose" $V$ to be $\mathbb{R}^n$. $T$ is already defined to be a linear operator on $V$ and in this case $V$, whatever it is, is fixed. However, the power of interpreting the mapping as a matrix is that we can effectively carry out all the calculations as if the mapping were from $\mathbb{R}^n$ to $\mathbb{R}^n$: This is precisely what an isomorphism allows us to do.

For example, suppose we have a linear mapping $T$ on $P_2(\mathbb{R})$, the vector space of polynomials with real coefficients of degree at most $2$: $$T(ax^2 + bx + c) = bx + c$$ In this case, our vector space $V$ is $P_2(\mathbb{R})$. We are not free to change it. However, what we are allowed to do is to study the matrix $$A=\begin{pmatrix}0 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}$$ which is just the matrix representation of $T$ with respect to the standard basis $\{x^2,\ x,\ 1\}$. The point here is that $A$ is not $T$. It is a representation of $T$ which happens to share many of the same properties. Therefore, by studying $A$, we gain valuable insight into the behaviour of $T$. For example, one way of finding eigenvectors for $T$ would be to find the eigenvectors of $A$. The eigenvectors of $A$ then correspond, via the coordinate isomorphism, to the eigenvectors of $T$.
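To make the last step concrete, here is a small sketch (using numpy; an assumption on my part, not part of the original answer) that computes the eigenvectors of $A$ and reads them back as polynomials via the basis $\{x^2,\ x,\ 1\}$:

```python
import numpy as np

# Matrix of T(ax^2 + bx + c) = bx + c in the basis {x^2, x, 1}
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

vals, vecs = np.linalg.eig(A)

# A coordinate eigenvector (a, b, c) corresponds to the polynomial
# a*x^2 + b*x + c under the isomorphism fixed by the basis.
for lam, v in zip(vals, vecs.T):
    a, b, c = v
    print(f"eigenvalue {lam:g}: polynomial {a:g}*x^2 + {b:g}*x + {c:g}")
```

Here $x^2$ is an eigenvector of $T$ for eigenvalue $0$ (indeed $T(x^2)=0$), while $x$ and $1$ are eigenvectors for eigenvalue $1$ (indeed $T(x)=x$ and $T(1)=1$), exactly matching the eigenvectors of $A$.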

You ask "If I'm given a matrix with entries in $\mathbb{F}$, how exactly would I go about determining information about it from linear maps?", but this question is a little backwards. If we have a matrix, then its information is readily available to us. For example, a huge amount of information can be obtained by simply row reducing the matrix. In general, it is easier to study matrices than to study abstract linear transformations and this is precisely why we represent linear transformations with matrices.
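As one example of information that can be read straight off a matrix: the rank of $A$ (obtained in practice by row reduction) equals $\dim(\operatorname{range} T)$ for the corresponding map, and rank-nullity then gives $\dim(\operatorname{null} T) = n - \operatorname{rank}(A)$. A quick sketch with numpy (the particular matrix is just an illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # = 2 * first row, so rank drops
              [0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)
n = A.shape[1]
print(rank, n - rank)             # dim(range T) = 2, dim(null T) = 1
```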

The bottom line is that matrices serve as simpler representatives for linear mappings. Given an arbitrary linear mapping, we can fix bases for the domain and codomain and obtain a corresponding matrix representation for the mapping. Conversely, for a given choice of bases, each matrix can also be interpreted as a general linear map. However, we seldom use the latter fact, since it is easier to work with matrices than with general linear mappings.

Some of your questions were a little hard to interpret so I hope I have addressed your main concerns here. Please do not hesitate to ask for clarification.
