Eigenbasis leads to diagonal matrix for linear transformation

Tags: eigenvalues-eigenvectors, linear-algebra, linear-transformations

I am following Gilbert Strang's Linear Algebra course (18.06, link). In lecture 30 on "Linear transformations and their matrices", he mentions that the eigenvector basis leads to a diagonal transformation matrix $\Lambda$, using the example of a projection matrix.

I am not able to understand this statement. To determine the transformation matrix $A$, we first need to fix the input and output bases. But if the bases must be chosen before $A$ exists, how can we choose them to be the eigenvectors of $A$? It feels like a chicken-and-egg problem.

Suppose I want a diagonal transformation matrix $\Lambda$. To determine $\Lambda$, we need the bases, but to determine the bases, we need the matrix itself, because we have to compute its eigenvectors.

Can someone please help me understand what I am missing here?

Best Answer

Purely from the definitions, you don’t have a problem here: You have some linear transformation $\phi : V \to V$ on some vector space. If you can choose a basis $v_1, \dots, v_n$ of $V$ such that every $v_i$ is an eigenvector of $\phi$, the transformation matrix will be a diagonal matrix.
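To make this concrete, here is a small numerical sketch (my own example, not from the lecture) using the projection the question mentions: projection onto the line spanned by $(1,1)$ in $\mathbb{R}^2$. The vector on the line is projected to itself (eigenvalue $1$), the perpendicular vector is projected to zero (eigenvalue $0$), so in that basis the matrix is $\operatorname{diag}(1, 0)$.

```python
import numpy as np

# Projection onto the line spanned by (1, 1) in R^2,
# written in the standard basis.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# v1 lies on the line (projected to itself, eigenvalue 1);
# v2 is perpendicular to it (projected to zero, eigenvalue 0).
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])

assert np.allclose(P @ v1, 1 * v1)  # P v1 = 1 * v1
assert np.allclose(P @ v2, 0 * v2)  # P v2 = 0 * v2

# In the basis (v1, v2), the transformation matrix is diagonal:
T = np.column_stack([v1, v2])       # change-of-basis matrix
Lam = np.linalg.inv(T) @ P @ T
print(np.round(Lam, 10))            # diag(1, 0)
```

Note that no chicken-and-egg problem arises: the eigenvectors here were found geometrically, without first writing down any matrix.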

Now, you’re right that we often want a transformation matrix in order to compute with the linear transformation before we know the eigenbasis. But that is not a problem: choose any basis $w_1, \dots, w_n$ first and represent $\phi$ in that basis (use the same basis for the domain and codomain!), giving a matrix $A$. Then do your computations with $A$. The algorithm for diagonalizing the matrix will give you a change-of-basis matrix $T$ in addition to the diagonal matrix $\Lambda$. This change-of-basis matrix then allows you to compute a basis $v_1, \dots, v_n$ such that the transformation matrix of $\phi$ with respect to $v_1, \dots, v_n$ is the diagonal matrix $\Lambda$.
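The workflow above can be sketched numerically with NumPy (the matrix $A$ here is an arbitrary diagonalizable example of my choosing): `np.linalg.eig` returns the eigenvalues and a matrix $T$ whose columns are the eigenvectors, and conjugating by $T$ recovers the diagonal $\Lambda$.

```python
import numpy as np

# Start from a matrix A representing phi in some arbitrary basis
# w_1, ..., w_n (any diagonalizable matrix works as an illustration).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigendecomposition: T's columns are eigenvectors of A -- they are
# the coordinates of the new basis v_1, ..., v_n in the old basis.
eigvals, T = np.linalg.eig(A)
Lam = np.diag(eigvals)

# The same transformation, expressed in the eigenvector basis,
# is the diagonal matrix Lambda:
assert np.allclose(np.linalg.inv(T) @ A @ T, Lam)

# Equivalently, A = T Lambda T^{-1}:
assert np.allclose(T @ Lam @ np.linalg.inv(T), A)
```

So the order of operations resolves the circularity: first a matrix in any convenient basis, then its eigenvectors, and only then the basis in which the matrix is diagonal.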
