Misunderstanding Theorem on Eigenvalues of an Upper Triangular Matrix

eigenvalues-eigenvectors, linear-algebra

I was looking at a theorem which says that if a linear operator $T:V\rightarrow V$ has an upper-triangular matrix with respect to some basis of $V$, then the eigenvalues of $T$ are precisely the entries on the diagonal of that upper-triangular matrix.

So I was looking at the linear map whose matrix representation $A$ is
$$ A = \begin{bmatrix} 1 & 2 \\ 2 & 1\end{bmatrix}.$$
I obviously misunderstand the theorem, because I put $A$ into upper triangular form
$$A^{\prime} = \begin{bmatrix} 2 & 1 \\ 0 & -3\end{bmatrix},$$
which would make the eigenvalues $2$ and $-3$; but this is clearly false, since the eigenvalues of $A$ are actually $-1$ and $3$.

I'm not sure what exactly I'm misunderstanding about the theorem, except that it may have to do with the "some basis" part. What am I missing here?
Thanks.

Best Answer

It is precisely the "some basis" part that you have misunderstood. What we're looking for is an upper-triangular matrix of $T$ relative to some other basis. If $T$ is the transformation whose matrix is $A$ in the original basis, then the matrix of $T$ relative to another basis is given by $SAS^{-1}$ for some invertible matrix $S$. Conjugating by $S$ in this way is a similarity transformation, and similar matrices have the same eigenvalues.
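For this particular $A$ the right basis is easy to exhibit: $(1,1)$ and $(1,-1)$ are eigenvectors, since $A(1,1)^T = (3,3)^T$ and $A(1,-1)^T = (-1,1)^T$. Putting them in the columns of a matrix $P$ and conjugating gives
$$
P = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix},
\qquad
P^{-1} A P
= \begin{bmatrix} \tfrac12 & \tfrac12 \\ \tfrac12 & -\tfrac12 \end{bmatrix}
  \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}
  \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
= \begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix},
$$
which is diagonal, hence upper triangular, and its diagonal entries $3$ and $-1$ are exactly the eigenvalues, as the theorem promises. (This is $SAS^{-1}$ with $S = P^{-1}$.)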

What you did to go from $A$ to $A'$ was a sequence of row operations. That is, for some invertible matrix $S$, you ended up with the product $SA$, not $SAS^{-1}$. Left multiplication alone is not a change of basis, so there is no reason for $SA$ to have the same eigenvalues as $A$, and indeed it doesn't.
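Here is a minimal numerical check of this distinction using NumPy (the variable names are mine): $S$ below encodes the row operations that produce the $A'$ in the question, and conjugating by $S$ preserves the eigenvalues while left-multiplying by $S$ does not.

```python
import numpy as np

# A is the matrix from the question; its eigenvalues are -1 and 3.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# S encodes row operations that produce the A' in the question
# (swap the rows, then replace row 2 by row 1 minus twice row 2),
# so S @ A is the upper-triangular A' = [[2, 1], [0, -3]].
S = np.array([[ 0.0, 1.0],
              [-2.0, 1.0]])

A_prime = S @ A               # left multiplication only: NOT similar to A
B = S @ A @ np.linalg.inv(S)  # similarity transform: same eigenvalues as A

def eigs(M):
    """Eigenvalues of M, sorted and rounded for easy comparison."""
    return sorted(round(float(x), 6) for x in np.linalg.eigvals(M).real)

print(eigs(A))        # [-1.0, 3.0]
print(eigs(A_prime))  # [-3.0, 2.0]  <- row reduction changed the eigenvalues
print(eigs(B))        # [-1.0, 3.0]  <- conjugation preserved them
```

The diagonal of `A_prime` does give its own eigenvalues ($2$ and $-3$), consistent with the theorem; they are simply not the eigenvalues of $A$, because `A_prime` does not represent the same operator $T$ in any basis.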