[Math] Endomorphisms, their matrix representations, and relation to similarity

abstract-algebra, linear-algebra, matrices

This question is really several short general questions to clear up some confusions. We'll start with where I'm at:

An endomorphism $\phi$ is a map from a vector space $V$ to itself. After choosing a basis $\mathcal{B}$ of $V$, one may determine a matrix to represent such an endomorphism with respect to that basis, say $[\phi]_{\mathcal{B}}$.

Question 1) If given a matrix without a basis specified, could you deduce a unique endomorphism it corresponds to? (my lean is no)

Question 2) In a similarity transform, say $A = SDS^{-1}$ where $D$ is diagonal, $S$ is a change-of-basis matrix from one basis to another. My question: since $D$ is diagonal, does that mean $D$ represents the same endomorphism as $A$, but with respect to the standard basis of $\mathbb{R}^n$? Or are we unable to determine which bases are involved if we are only given the matrices?

Question 3) Given a matrix representing an endomorphism, is it possible to determine the basis used to represent the endomorphism? (Lean yes)

Overarching Question) I am trying to understand what happens under similarity transforms. I understand we input a vector, a change of basis is applied to it, the endomorphism is applied with respect to the new basis, and then the result is changed back to the old basis. My confusion concerns the construction of the similarity matrix: if a matrix is diagonalizable, then $S$ turns out to be the eigenvectors arranged in a prescribed order as columns. Why is this? Why do the eigenvectors of the endomorphism $\phi$ with respect to one basis act as a change-of-basis matrix, and what basis do they go to? (This is really part of Question 2.) Does this send the vectors to the standard basis, or to some other basis that just happens to diagonalize the endomorphism?

Thanks!

Best Answer

1) No. Any choice of basis determines an endomorphism that the matrix corresponds to and in general different choices of basis will give different endomorphisms. (They are, of course, all similar.)
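To make this concrete, here is a small numpy sketch (the matrix, bases, and test vector are my own illustration, not from the answer): reading one fixed matrix $M$ against two different bases of $\mathbb{R}^2$ yields two genuinely different endomorphisms, even though they are similar.

```python
import numpy as np

# One fixed matrix...
M = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# ...read against two different bases of R^2 (columns are the basis vectors).
B = np.eye(2)                        # the standard basis
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # a different basis

# The endomorphism that M represents in the basis with column matrix P
# sends a vector v (in standard coordinates) to P M P^{-1} v.
def endo(P, v):
    return P @ M @ np.linalg.inv(P) @ v

v = np.array([0.0, 1.0])
# Same matrix M, different bases, different endomorphisms:
assert not np.allclose(endo(B, v), endo(C, v))
```

The two maps disagree on `v`, so the matrix alone does not pin down the endomorphism; the basis is an essential part of the data.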

2) No. Consider the case that $A$ is not diagonal. Two matrices have the same entries if and only if they represent the same linear transformation with respect to a fixed basis.

3) No. This is the same question as 1).

4) To see this concretely, write out the condition $AS = SD$ for $D$ a diagonal matrix explicitly. To see it abstractly, let $T$ be a linear transformation on a vector space with basis $e_i$, and suppose $T$ has a basis $v_i$ of eigenvectors with eigenvalues $\lambda_i$. Then $T$ is diagonal with respect to the basis $v_i$. If $S$ denotes the linear transformation which sends $v_i$ to $e_i$, then $$STS^{-1}(e_i) = ST v_i = S \lambda_i v_i = \lambda_i S v_i = \lambda_i e_i$$

so $STS^{-1}$ is diagonal with respect to the basis $e_i$. Now, the above is a statement about linear transformations which is independent of basis. Writing everything above in terms of the basis $e_i$ gives you a corresponding statement about matrices, and in that statement $S^{-1}$ is, more or less by definition, the matrix whose columns are the coordinates of the $v_i$ (with respect to the basis $e_i$).
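A quick numerical check of this (with numpy, on a hypothetical $2 \times 2$ matrix of my own choosing): `np.linalg.eig` returns a matrix whose columns are eigenvectors, which plays the role of $S^{-1}$ above.

```python
import numpy as np

# A hypothetical 2x2 matrix with distinct real eigenvalues (5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns eigenvalues and a matrix whose COLUMNS are
# the corresponding eigenvectors; P is the S^{-1} of the argument above.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# A = P D P^{-1}, equivalently P^{-1} A P = D is diagonal.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
assert np.allclose(np.linalg.inv(P) @ A @ P, D)
```

Conjugating by the eigenvector matrix really does diagonalize $A$, matching the abstract computation $STS^{-1}(e_i) = \lambda_i e_i$.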


I have often thought that elementary linear algebra would be less confusing if it were made explicit that when changing bases one is really working with two vector spaces; first the original vector space $V$ one cares about and second the concrete vector space $\mathbb{C}^n$ where $n = \dim V$ with its distinguished basis. A basis for $V$ is then equivalent to the choice of an isomorphism $f : \mathbb{C}^n \to V$ and changing bases corresponds to changing the choice of this map. Crucially, there are two natural ways to do this: either precompose with an automorphism $\mathbb{C}^n \to \mathbb{C}^n$ or postcompose with an automorphism $V \to V$. The two give the same result, but the notion of sameness here is itself dependent on the choice of $f$.

In category theory, one says that the finite-dimensional vector space (over a fixed field) of a given dimension is unique up to isomorphism, but not unique up to unique isomorphism, and so when identifying different vector spaces one must keep track of the identifications one is using or else risk getting hopelessly lost.

In other words, when dealing with objects that are isomorphic but for which the isomorphism is not unique, it is better to behave as if they are different objects even if they are in some sense "the same."


It might help to think of the same linear transformation with respect to two different bases as operating on two different data types. That is, with respect to a given basis $\mathcal{B}$, the corresponding matrix should perhaps be thought of as a function which accepts and spits out "$\mathcal{B}$-type" vectors, and with respect to a different basis $\mathcal{C}$ accepts and spits out $\mathcal{C}$-type vectors. These operations are compatible but to specify the compatibility you need to typecast from $\mathcal{B}$-type vectors to $\mathcal{C}$-type vectors and back again, and this is exactly what the change-of-basis matrix is supposed to do.

The change-of-basis matrix therefore has different input and output types.
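The typecasting picture can be sketched in a few lines of numpy (the two bases here are my own illustration): the change-of-basis matrix converts $\mathcal{B}$-type coordinate vectors into $\mathcal{C}$-type ones, and both coordinate vectors name the same underlying vector.

```python
import numpy as np

# Two bases of R^2; columns are the basis vectors in standard coordinates.
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
C = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# A vector with B-coordinates x_B has standard coordinates B @ x_B,
# so the "typecast" from B-type to C-type coordinates is C^{-1} B.
B_to_C = np.linalg.inv(C) @ B
C_to_B = np.linalg.inv(B) @ C   # the inverse cast

x_B = np.array([3.0, 1.0])
x_C = B_to_C @ x_B

# Both coordinate vectors describe the same underlying vector...
assert np.allclose(B @ x_B, C @ x_C)
# ...and casting back recovers the original B-coordinates.
assert np.allclose(C_to_B @ x_C, x_B)
```

Note that `B_to_C` accepts $\mathcal{B}$-type input and produces $\mathcal{C}$-type output, which is exactly the "different input and output types" point above.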
