Matrices – Is Matrix Representation of Complex Numbers Just a Trick?

complex-numbers, linear-transformations, matrices

This question is a follow-up to my earlier Stack Exchange question. Although this question does not depend on knowing what that one was, I think it adds context for where I'm coming from.

First of all, as I stated in the link, I was originally confused about how we have two separate ways of representing a complex number in matrix form. The conclusion I came to was that the two definitions technically should not be used in the same system or equation to represent a complex number. I may be wrong to say this, and if I am, I would appreciate any argument against this idea.

Please skip to the conclusion if you want to read less.

Demonstrating the problem of having two definitions

$2 \times 2$ matrix form definition $a+bi=\begin{bmatrix}a&-b\\ b&a\end{bmatrix}$

and

$2 \times 1$ matrix (vector form) definition $a+bi=\begin{bmatrix}a\\ b\end{bmatrix}$

The two are obviously not the same, so how could they represent the same thing? I did a little investigation using the example of $i \times i = -1$.

case 1
$\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\cdot \begin{bmatrix}0&-1\\ 1&0\end{bmatrix}=\begin{bmatrix}-1&0\\ 0&-1\end{bmatrix}$

I did a linear transformation on a matrix, where the output was also a matrix that can represent both a linear transformation and a complex number.
It is also interesting to note that the commutativity of complex multiplication is preserved, as it should be.
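Case 1 is easy to check numerically. Here is a minimal Python sketch (assuming numpy is available; the helper name `mat` is mine):

```python
import numpy as np

def mat(a, b):
    """2x2 matrix form of a + bi."""
    return np.array([[a, -b],
                     [b,  a]])

i_mat = mat(0, 1)

# i * i should give the matrix form of -1, i.e. minus the identity
print(i_mat @ i_mat)  # [[-1  0]
                      #  [ 0 -1]]

# matrices of this special form commute, just like complex numbers
z, w = mat(1, 2), mat(3, -4)
print(np.array_equal(z @ w, w @ z))  # True
```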

case 2
$\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\cdot \begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}-1\\0\end{bmatrix}$

This time I did a linear transformation on a vector, and the output was also a vector that represents a complex number.
This method works since the "rotational property" of the complex number is encoded in the linear transformation.

case 3
$\begin{bmatrix}0\\1\end{bmatrix}\cdot \begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$?
This "product" is nonsense (two column vectors cannot even be multiplied as matrices). I think this is because the "rotational property" is encoded in the basis vectors of the vector, namely $1$ and $i$. So to encode this property, it makes sense to bring the basis vectors back in:
$\begin{bmatrix}0\\1\end{bmatrix}\cdot \begin{bmatrix}0\\1\end{bmatrix}=i\times i=-1$.

case 4
$\begin{bmatrix}0\\1 \end{bmatrix} \cdot \begin{bmatrix} 0&-1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 0&0 \\ 0&-1 \end{bmatrix}$?

This is also a nonsense product: a $2\times 1$ vector cannot be multiplied on the right by a $2\times 2$ matrix.

Interestingly, you can do a hack and turn the matrix into a vector:
$\begin{bmatrix}0\\1 \end{bmatrix} \cdot \left( \begin{bmatrix} 0&-1 \\ 1&0 \end{bmatrix}\cdot\begin{bmatrix}1\\0 \end{bmatrix} \right) = \begin{bmatrix}0\\1 \end{bmatrix}\cdot \begin{bmatrix}0\\1 \end{bmatrix}$
$=i\times i = -1$
But this is kind of cheating, since we are just converting a matrix into a vector.

Conclusion
The matrix representation encodes the idea of rotation and also preserves additivity and commutativity. By definition, the basis vectors can be anything for this matrix.
The vector representation does not encode the idea of rotation itself, but its basis vectors do (its basis vectors must be the real and imaginary units, $1$ and $i$).

The derivation of the matrix representation requires the assumption that $i=\begin{bmatrix}0\\1\end{bmatrix}$ and $1=\begin{bmatrix}1\\0\end{bmatrix}$.
Because of this, I don't think the matrix representation should be defined as a representation of a complex number; rather, it is a tool that encodes the rotational and additivity properties. It can therefore be used to rotate (multiply) the vector representation. So
$ i \cdot i $ and $\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\cdot \begin{bmatrix}0\\1\end{bmatrix}$

might be equivalent operations, but I don't think they should be said to be the same. It is nothing but a trick that yields the same result.
It is obvious that the matrix representation and the vector representation should not both be defined to be $i$ in the same system, as a simple addition shows:
If $ i \cdot i \equiv \begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\cdot \begin{bmatrix}0\\1\end{bmatrix}$, then surely
$ i + i \equiv \begin{bmatrix}0&-1\\ 1&0\end{bmatrix} + \begin{bmatrix}0\\1\end{bmatrix}$, which is not true.
If the derivation of the matrix form requires the definition of the vector form, surely the matrix form cannot define itself as the same thing. Also, does it really make sense to define a complex number as a linear transformation? For these reasons the matrix definition just seems like a trick to me, and I feel that it should technically be defined as
$a+bi = \begin{bmatrix}a&-b\\ b&a\end{bmatrix} \cdot \begin{bmatrix}1\\ 0\end{bmatrix}$

And the linear-transformation trick should be defined explicitly as a trick, rather than as a complex number itself:
$(a+bi)\times (c+di) = \begin{bmatrix}a&-b\\ b&a\end{bmatrix} \cdot \begin{bmatrix}c\\d\end{bmatrix}$
Likewise, the additivity trick should be treated as a trick, rather than treating a matrix as a complex number itself.
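As a sanity check, the "multiplication trick" in the formula above is easy to verify numerically. A minimal Python sketch (the helper name `mult` is mine):

```python
import numpy as np

def mult(a, b, c, d):
    """(a+bi)(c+di) via the matrix-times-vector trick from the conclusion."""
    M = np.array([[a, -b],
                  [b,  a]])
    v = np.array([c, d])
    return M @ v  # vector form [real, imag] of the product

# (1+2i)(3+4i) = 3 + 4i + 6i + 8i^2 = -5 + 10i
print(mult(1, 2, 3, 4))  # [-5 10]
```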

Best Answer

It is not a trick.

Fix $z=a+bi \in \mathbb C$ and consider the map $\mu : w \mapsto zw$.

Seeing $\mathbb C$ as a vector space over $\mathbb R$, the matrix of $\mu$ with respect to the basis $1,i$ is exactly $$\begin{bmatrix}a&-b\\ b&a\end{bmatrix}$$

The map $z \mapsto \mu$ is an injective homomorphism of $\mathbb R$-algebras $\mathbb C \to \text{End}_\mathbb R(\mathbb C) \cong M_2(\mathbb R)$.
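For concreteness, the homomorphism can be checked numerically; here is a minimal Python sketch (the function name `mu` mirrors the answer's notation):

```python
import numpy as np

def mu(z: complex) -> np.ndarray:
    """Matrix of the map w -> z*w with respect to the basis {1, i}."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 1 + 2j, 3 - 1j

# algebra-homomorphism properties: multiplication and addition are preserved
print(np.allclose(mu(z * w), mu(z) @ mu(w)))  # True
print(np.allclose(mu(z + w), mu(z) + mu(w)))  # True

# injectivity: mu(z) determines z (read a, b back off the first column)
a, b = mu(z)[:, 0]
print(a + b * 1j == z)  # True
```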

The same construction works for every finite extension of fields $E/F$: the matrix ring $M_n(F)$ contains copies of all extensions of $F$ of degree $n$.

In particular, for instance, $\mathbb Q(\sqrt 2)$ can be given a matrix interpretation in $M_2(\mathbb Q)$. Try it!
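Trying it numerically: a plausible choice of matrix for $a+b\sqrt 2$, namely the matrix of multiplication by $a+b\sqrt 2$ in the basis $1,\sqrt 2$, can be sketched in Python (the helper name `mat_sqrt2` is mine):

```python
import numpy as np

def mat_sqrt2(a, b):
    """Matrix of multiplication by a + b*sqrt(2) in the basis {1, sqrt(2)}."""
    return np.array([[a, 2 * b],
                     [b, a]])

R = mat_sqrt2(0, 1)   # the matrix playing the role of sqrt(2)
print(R @ R)          # 2 * identity, i.e. "sqrt(2)^2 = 2"

# multiplication agrees: (1 + sqrt(2)) * (3 - sqrt(2)) = 1 + 2*sqrt(2)
lhs = mat_sqrt2(1, 1) @ mat_sqrt2(3, -1)
print(np.array_equal(lhs, mat_sqrt2(1, 2)))  # True
```

Note that everything stays inside $M_2(\mathbb Q)$: the entries are rational whenever $a$ and $b$ are.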