Why Do We Need the Left-Multiplication Matrix Separately?

linear-algebra, linear-transformations

In Friedberg, Insel, and Spence's Linear Algebra (4th Edition), the left-multiplication transformation is defined as follows:

Let $A$ be an $m \times n$ matrix with entries from a field $F$. We denote by
$L_A$ the mapping $L_A: F^n \rightarrow F^m$ defined by $L_A(x) = Ax$
(the matrix product of $A$ and $x$) for each column vector $x \in F^n$.
We call $L_A$ a $\textbf{left-multiplication transformation}$.

The answers [here][1] and [here][2] try to explain the left-multiplication transformation.

My question is:

i) Can someone explain this definition in more layman's terms? For some reason it is not very clear to me, and the two links above didn't help much either.

ii) Why do we need a new definition of the left-multiplication transformation for $\mathbb{R}$? We have already proved that "all" linear transformations can be associated with a matrix, say $A$, once ordered bases are fixed. So I can't see the utility of the new definition.

iii) From the definition it seems the linear map is "contingent" on $A$ (that is, $A$ is defined independently of any linear map). What guarantees that the dimensions of the matrix $[T]_\beta^\gamma$ associated with a linear map defined on $\mathbb{R}$ and the dimensions of $A$ will be the same?

Thanks!
[1]: What exactly is a left-multiplication transformation?
[2]: Linear Transformations and Left-multiplication Matrix

Best Answer

Have a look at this old answer of mine, the diagram in particular. This should hopefully be something familiar to you.

The idea is that we wish to describe transformations from an abstract $n$-dimensional space $V$ to an abstract $m$-dimensional space $W$ in more familiar, computable terms. If we fix a basis $\beta$ for $V$, this gives us an isomorphism between $V$ and the space $F^n$, which takes an abstract vector $v \in V$ and turns it into the column vector $[v]_\beta \in F^n$. This turns the mysterious, abstract, possibly difficult-to-work-with space $V$ into a familiar space of column vectors. Addition in $V$ corresponds to adding these column vectors, and similarly for scalar multiplication. We can completely understand $V$ by looking only at coordinate vectors instead.
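For a quick toy example (my own, not from the book): take $V = P_1(\Bbb{R})$, the polynomials of degree at most $1$, with basis $\beta = \{1, x\}$. Then $$[a + bx]_\beta = \begin{pmatrix} a \\ b \end{pmatrix} \in \Bbb{R}^2,$$ and adding or scaling polynomials corresponds exactly to adding or scaling these column vectors.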

Similarly, fixing a basis $\gamma$ for $W$ gives us an isomorphism $w \mapsto [w]_\gamma$ from $W$ to $F^m$. In much the same way, we can understand the abstract vector space $W$ concretely in terms of column vectors.

This also means that linear transformations from $V$ to $W$, which again can be quite abstract, can be concretely understood as linear transformations between $F^n$ and $F^m$ (once bases $\beta$ and $\gamma$ are fixed).

The nice thing is that linear transformations between $F^n$ and $F^m$ can be expressed as multiplication by unique $m \times n$ matrices. This is what this definition is trying to establish. This step is important: we need to establish not only a correspondence between linear maps $T : V \to W$ and linear maps $S : F^n \to F^m$, but also one between linear maps $T : V \to W$ and $m \times n$ matrices, and both connections are needed.
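Concretely, once $\beta$ and $\gamma$ are fixed and we write $A = [T]_\beta^\gamma$, the standard coordinate identity reads $$L_A([v]_\beta) = [T]_\beta^\gamma [v]_\beta = [T(v)]_\gamma \quad \text{for every } v \in V,$$ so the abstract map $T$ and the concrete map $L_A$ do exactly the same thing once everything is expressed in coordinates.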

There are two directions to this: we need to show that every linear map from $F^n$ to $F^m$ can be expressed as multiplication by an $m \times n$ matrix, and that multiplication by an $m \times n$ matrix is always a linear map from $F^n$ to $F^m$. The latter is what is about to be established. Without showing that $L_A : F^n \to F^m$ is linear, all we would know is that linear maps between $V$ and $W$ correspond to some $m \times n$ matrices. What if certain $m \times n$ matrices turn out to be out of bounds?

They're not. As it turns out, $L_A$ is linear, just by standard distributivity and associativity properties of matrices, e.g. $$L_A(x + y) = A(x + y) = Ax + Ay = L_A(x) + L_A(y).$$ This and the scalar homogeneity argument imply that $L_A$ is always a linear map.
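For completeness, the scalar homogeneity step is just as short: for any scalar $c \in F$ and any $x \in F^n$, $$L_A(cx) = A(cx) = c(Ax) = c\,L_A(x).$$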

Here is an example to show you how this definition works. Suppose we pick arbitrarily a matrix like $$A = \begin{pmatrix} 1 & -1 \\ 0 & 0 \\ 2 & -2\end{pmatrix}.$$ Then $A$ is $3 \times 2$, and so $L_A$ should be a linear map from $\Bbb{R}^2$ to $\Bbb{R}^3$. By definition, $$L_A\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 0 & 0 \\ 2 & -2\end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x - y \\ 0 \\ 2x - 2y\end{pmatrix}.$$ Hopefully you can see that this is a linear transformation, and if you were to take the standard matrix for this linear transformation, you would simply get $A$. You can do this with any $A$, helping prove that matrix multiplication is equivalent to general linear transformations between finite-dimensional spaces.
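Indeed, you can check that the columns of $A$ are precisely the images of the standard basis vectors $e_1, e_2$ of $\Bbb{R}^2$ under $L_A$, which is exactly how the standard matrix of a linear map is built: $$L_A(e_1) = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \qquad L_A(e_2) = \begin{pmatrix} -1 \\ 0 \\ -2 \end{pmatrix}.$$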