Why can matrices be represented as column vectors in an abstract vector space

linear algebra

Recently, I solved this question:

"Let $V = \mathbb{R}^{2\cdot 2}$ with basis
$\mathbf{B}$ = $(\mathbf{b_1}, \mathbf{b_2}, \mathbf{b_3}, \mathbf{b_4}, )$, where

$\mathbf{b}_1 = \begin{pmatrix}
1 & 0\\
0 & 0
\end{pmatrix}$

$\mathbf{b}_2 = \begin{pmatrix}
0 & 0\\
1 & 0
\end{pmatrix}$

$\mathbf{b}_3 = \begin{pmatrix}
0 & 1\\
0 & 0
\end{pmatrix}$

$\mathbf{b}_4 = \begin{pmatrix}
0 & 0\\
0 & 1
\end{pmatrix}$

Thus, $n = \dim(V) = 4$. The map $f: V \to V$ given by

$f(A) = A^{T}C$, where $C = \begin{pmatrix}
1 & 3\\
2 & 4
\end{pmatrix}$,

is linear. Find $M_{\mathbf{B} \leftarrow \mathbf{B}}$"

$M_{\mathbf{B} \leftarrow \mathbf{B}} =
\begin{pmatrix}
1 & 2 & 0 & 0\\
0 & 0 & 1 & 2\\
3 & 4 & 0 & 0\\
0 & 0 & 3 & 4
\end{pmatrix}$

Though I know how to solve these types of questions, I don't understand why matrices can be represented as column vectors in abstract vector spaces. I understand that in this instance, they have to be presented as column vectors, since…

$\begin{pmatrix}
1 & 2 & 0 & 0\\
0 & 0 & 1 & 2\\
3 & 4 & 0 & 0\\
0 & 0 & 3 & 4
\end{pmatrix}$$\begin{pmatrix}
1 & 0\\
0 & 0
\end{pmatrix}$

… is something you can't calculate. Yet…

$\begin{pmatrix}
1 & 2 & 0 & 0\\
0 & 0 & 1 & 2\\
3 & 4 & 0 & 0\\
0 & 0 & 3 & 4
\end{pmatrix}$$\begin{pmatrix}
1 \\
0 \\
0 \\
0 \\
\end{pmatrix}$

…is.

I've read this post, yet I'm not quite satisfied with the answer. I understand that matrices satisfy the axioms for an abstract vector space, yet the conversion from matrix…

$\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}$

…to column vector…

$\begin{pmatrix}
a \\
c \\
b \\
d \\
\end{pmatrix}$

…instinctively feels off.

Could someone help me out on this? It would be greatly appreciated.

Best Answer

You have a tiny error in the order of the basis elements versus the order of the entries in the column vector. That makes things look a bit weird in this answer, but hopefully it's not too big an issue.

The matrix $$v=\begin{pmatrix} a & b\\ c & d \end{pmatrix}$$ is just a plain, straightforward element of $V$: a $2\times 2$ matrix. Nothing special is going on here.

Now we introduce the basis $\mathbf B$. With this basis, we can write $$ v=a\mathbf b_1 + c \mathbf b_2 + b\mathbf b_3 + d \mathbf b_4. $$ Notice that we have now hidden any reference to the actual form of $v$. It has been abstracted away into the $\mathbf b_i$'s. Our notation no longer explicitly cares that $v$ is a matrix: it is an element of a vector space with a basis, and that's it.

Also note that if we remember that we are using the basis $\mathbf B$, we don't really need to write the $\mathbf b_i$. We can just write down the coefficients $a,c,b,d$ in order. And that's how we make the column vector $$ v=\begin{pmatrix} a \\c\\ b\\ d \end{pmatrix} $$ The fact that $v$ originally is a matrix is entirely forgotten in this representation. That information is baked into the basis vectors, which are now out of sight. This column is only a record of the coefficients, and nothing more. The fact that these are the same entries as in the original representation of $v$ is a coincidental by-product of $\mathbf B$ being nice, and nothing else. This is the representation that is used when we calculate things like $Mv$.
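As a concrete sanity check (a sketch in Python with NumPy; the helper name `coords` is my own), the point of the column representation is exactly that multiplying by $M$ agrees with applying $f$ and then taking coordinates:

```python
import numpy as np

M = np.array([[1, 2, 0, 0],
              [0, 0, 1, 2],
              [3, 4, 0, 0],
              [0, 0, 3, 4]])
C = np.array([[1, 3], [2, 4]])

def coords(A):
    # coefficients of A in the basis order b1, b2, b3, b4,
    # i.e. the entries at positions (1,1), (2,1), (1,2), (2,2)
    return np.array([A[0, 0], A[1, 0], A[0, 1], A[1, 1]])

A = np.array([[5, 6], [7, 8]])  # an arbitrary element of V

# both routes produce the same coordinate vector
print(np.array_equal(M @ coords(A), coords(A.T @ C)))  # True
```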

The easiest way to solve this problem, in my opinion, is to calculate $f(\mathbf b_i)$ for each $i$ using the basic $2\times 2$ representation and convert each result into the column representation described above; these columns are precisely the columns of $M$.
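That recipe can be sketched directly (Python with NumPy; the helper names `f` and `coords` are my own):

```python
import numpy as np

# the basis B = (b1, b2, b3, b4) as 2x2 matrices
B = [np.array([[1, 0], [0, 0]]),
     np.array([[0, 0], [1, 0]]),
     np.array([[0, 1], [0, 0]]),
     np.array([[0, 0], [0, 1]])]
C = np.array([[1, 3], [2, 4]])

def f(A):
    # the linear map f(A) = A^T C
    return A.T @ C

def coords(A):
    # coordinate vector of A in the basis B
    return np.array([A[0, 0], A[1, 0], A[0, 1], A[1, 1]])

# column i of M is the coordinate vector of f(b_i)
M = np.column_stack([coords(f(b)) for b in B])
print(M)
```

Running this reproduces the matrix $M_{\mathbf{B} \leftarrow \mathbf{B}}$ stated in the question.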