Can I get the eigenvalues and eigenvectors of a linear transformation given its matrix representation with respect to two different bases?

change-of-basis, eigenvalues-eigenvectors, linear-algebra, linear-transformations

Given a linear transformation $T:P_2(\mathbb{R}) \rightarrow P_2(\mathbb{R})$ and its matrix representation $[T]_{B,C}$ with respect to two different bases $B$ and $C$:
Can I get the eigenvalues and the eigenvectors of that linear transformation just from that matrix? Or should I get the matrix representation of the linear transformation with respect to a single basis, for instance $[T]_{B,B}$ or $[T]_{C,C}$?

Best Answer

You get no information whatsoever about non-zero eigenvalues/eigenvectors from $[T]_{B,C}$ unless you know $B$ and $C$. Of course, knowing the matrix for an operator with respect to known bases does allow you to reconstruct the operator, and hence information such as its eigenvalues/eigenvectors. But if you just have a matrix with respect to two unknown bases, you have essentially no information.

You do get a bit of information about eigenvalues/eigenvectors of $0$. Recall that the eigenvectors corresponding to $0$ are the kernel of $T$. The nullspace of the matrix $[T]_{B, C}$ is always the image of the kernel of $T$ mapped under the coordinate vector map with respect to $B$. That is, $$p \in \operatorname{ker} T \iff [p]_B \in \operatorname{null} [T]_{B, C}.$$ As the coordinate map is injective, this makes the two spaces isomorphic, and hence of the same dimension.
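As one concrete illustration of this correspondence (my choice of map and bases, not from the question): take $T = D$, the derivative on $P_2(\mathbb{R})$, with domain basis $B = (1, x, x^2)$ and codomain basis $C = (1, 1+x, x^2)$. The kernel of $D$ is the constants, and the constant polynomial $1$ has $B$-coordinates $e_1$:

```python
import numpy as np

# Matrix of the derivative D : P_2 -> P_2 with respect to
# domain basis B = (1, x, x^2) and codomain basis C = (1, 1+x, x^2).
# Columns come from: D(1) = 0, D(x) = 1 = 1*(1),
# D(x^2) = 2x = -2*(1) + 2*(1+x).
M = np.array([[0., 1., -2.],
              [0., 0.,  2.],
              [0., 0.,  0.]])

# ker D = constant polynomials; [1]_B = e_1 lies in the nullspace of M.
e1 = np.array([1., 0., 0.])
print(M @ e1)                    # the zero vector

# The nullspace of M is 1-dimensional, matching dim ker D = 1.
print(3 - np.linalg.matrix_rank(M))
```

The rank computation confirms that the nullspace of $[D]_{B,C}$ has the same dimension as $\ker D$, exactly as the isomorphism above predicts.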

This map therefore takes eigenvectors of $T$ to eigenvectors of $[T]_{B, C}$, each corresponding to $0$. Note that this is the usual correspondence we get when considering eigenvectors of $T$ and eigenvectors of $[T]_{B, B}$ (for more general eigenvalues). In other words, the situation is largely unchanged for the eigenvalue $0$.

However, it's also worth noting that generalised eigenvectors corresponding to $0$ are fair game. While the geometric multiplicity (the dimension of the eigenspace) is fixed, the algebraic multiplicity (the dimension of the generalised eigenspace, a.k.a. the exponent of the factor $\lambda$ in the characteristic polynomial) can definitely change. These extra dimensions can become new non-zero eigenvalues, be absorbed by other non-zero eigenvalues, or still contribute to the $0$ eigenvalue (possibly changing the structure of the Jordan blocks corresponding to $0$).
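Here is a small numerical sketch of this phenomenon (my own example: the derivative on $P_2(\mathbb{R})$, with $C$ chosen as a reordering of $B$). With respect to $B = (1, x, x^2)$ alone, $[D]_{B,B}$ is nilpotent, so $0$ has algebraic multiplicity $3$; switching the codomain basis to $C = (x, 1, x^2)$ changes the algebraic multiplicity of $0$ while the geometric multiplicity stays put:

```python
import numpy as np

# [D]_{B,B} with B = (1, x, x^2): D(1)=0, D(x)=1, D(x^2)=2x.
# Nilpotent matrix: characteristic polynomial t^3, eigenvalue 0
# with algebraic multiplicity 3.
DBB = np.array([[0., 1., 0.],
                [0., 0., 2.],
                [0., 0., 0.]])

# Same map, codomain basis reordered to C = (x, 1, x^2):
# D(x) = 1 = 0*x + 1*1, D(x^2) = 2x = 2*x.
DBC = np.array([[0., 0., 2.],
                [0., 1., 0.],
                [0., 0., 0.]])

print(np.linalg.eigvals(DBB))  # 0, 0, 0
print(np.linalg.eigvals(DBC))  # 0 now has algebraic multiplicity 2,
                               # and a new eigenvalue 1 has appeared

# Geometric multiplicity of 0 is unchanged: both nullspaces are 1-dimensional.
print(3 - np.linalg.matrix_rank(DBB), 3 - np.linalg.matrix_rank(DBC))
```

One of the "extra" dimensions of the generalised $0$-eigenspace has become a new non-zero eigenvalue, exactly as described above, while the eigenspace of $0$ itself keeps its dimension.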

I've given no examples of this, but here's something that may help. Pick your favourite invertible map $T : V \to V$, where $V$ is finite-dimensional. Pick your favourite basis $B = (b_1, b_2, \ldots, b_n)$ of $V$. Then $C = (Tb_1, Tb_2, \ldots, Tb_n)$ is also a basis! Further, using these bases, it's straightforward to show that $$[T]_{B, C} = I_{n \times n},$$ i.e. the identity matrix. So, any invertible map, with any array of eigenvalues and eigenvectors, can become totally homogenised to the point of being the identity map. Any subtleties about the structure of the eigenspaces (e.g. diagonalisability) are totally gone, and now the whole space is one big, undifferentiated eigenspace corresponding to the single eigenvalue $1$.
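This construction is easy to check numerically. A quick sketch (the matrix $A$ below is an arbitrary invertible example of mine, not from the answer): take $B$ to be the standard basis of $\mathbb{R}^3$ and $C = (Tb_1, Tb_2, Tb_3)$, then express each $Tb_j$ in $C$-coordinates:

```python
import numpy as np

# Any invertible map T on R^3, written as a matrix A in the standard basis.
# (Arbitrary example; its particular eigenvalues are irrelevant to the point.)
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])
assert abs(np.linalg.det(A)) > 1e-12   # T is invertible

B = np.eye(3)   # domain basis: the standard basis, as columns
C = A @ B       # codomain basis: (Tb_1, Tb_2, Tb_3), as columns

# [T]_{B,C}: the j-th column holds the C-coordinates of T b_j,
# i.e. the solution x of C x = A b_j.
M = np.linalg.solve(C, A @ B)
print(np.round(M, 10))   # the identity matrix, regardless of A's spectrum
```

Whatever eigenstructure $A$ had in the standard basis, $[T]_{B,C}$ collapses to the identity, as claimed.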

In other words, if you care about eigenvalues/eigenvectors, definitely consider $[T]_{B, B}$ or $[T]_{C, C}$. I hope that helps!