[Math] Finding a basis of an eigenspace with complex eigenvalues

eigenvalues-eigenvectors, linear-algebra

I'm studying Leon's Linear Algebra with Applications on my own, and in section 6.1 he gives the following example:

Given $A = \begin{pmatrix}
1 & 2\\
-2 & 1
\end{pmatrix}$, compute the eigenvalues of $A$ and find bases for the corresponding eigenspaces.

His solution: $\begin{vmatrix}
1-\lambda & 2\\
-2 & 1-\lambda
\end{vmatrix} = (1-\lambda)^2+4.$

Since $\lambda_1 = 1 + 2i$ and $\lambda_2 = 1 - 2i$,

$A - \lambda_1I = \begin{pmatrix}
-2i & 2\\
-2 & -2i
\end{pmatrix} = -2\begin{pmatrix}
i & -1\\
1 & i
\end{pmatrix}$.

He then concluded that $\left \{ (1,i)^T \right \}$ is a basis for the eigenspace corresponding to $\lambda_1$.


I understand how he found the roots and set up $A - \lambda_1I$, but I don't know how he found the basis for the eigenspace. I know how to do this when the roots are real, but when they are complex I don't get it. If anybody could explain this to me I would appreciate it. Thanks in advance.

Best Answer

You have to solve the linear system $$ -2\begin{pmatrix} i & -1\\ 1 & i \end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}= \begin{pmatrix}0 \\ 0\end{pmatrix}. $$ The nonzero scalar factor doesn't change the solution set, and the second row is $-i$ times the first, so the system reduces to the single equation $ix_1-x_2=0$. A nonzero solution of this system is thus $$ \begin{pmatrix}1 \\ i\end{pmatrix} $$ It's no different from the “real” case.
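
If you want a numerical sanity check, here is a minimal sketch in Python with NumPy (not part of the book or of this answer); `A`, `lam1`, and `v` are just names for the matrix, eigenvalue, and candidate eigenvector above.

```python
import numpy as np

# The matrix and eigenvalue from the example above.
A = np.array([[1, 2],
              [-2, 1]], dtype=complex)
lam1 = 1 + 2j
v = np.array([1, 1j])                    # the claimed basis vector (1, i)^T

# (A - lambda_1 I) v should be the zero vector ...
print((A - lam1 * np.eye(2)) @ v)        # -> [0.+0.j 0.+0.j]

# ... equivalently, A v = lambda_1 v.
print(np.allclose(A @ v, lam1 * v))      # -> True
```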

As Amzoti observes, the fact that your matrix has real coefficients implies that $\lambda_2=\bar\lambda_1$; so if $v=(1,i)^T$ and $\bar{v}=(1,-i)^T$ is the “conjugate” of $v$, then, since $\bar{A}=A$, you have $$ A\bar{v}=\bar{A}\bar{v}=\overline{Av}=\overline{\lambda_1 v}=\bar\lambda_1\bar{v}=\lambda_2\bar{v}, $$ so that $\bar{v}$ is an eigenvector for $\lambda_2$. In other words, any time you find an eigenvector for a complex (non-real) eigenvalue of a real matrix, you get an eigenvector for the conjugate eigenvalue for free.
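
The same kind of numerical check works for the conjugate pair; again this is only an illustrative NumPy sketch, not part of the original answer, and the variable names continue the ones used above.

```python
import numpy as np

A = np.array([[1, 2],
              [-2, 1]], dtype=complex)
lam1 = 1 + 2j
v = np.array([1, 1j])

v_bar = np.conj(v)        # (1, -i)^T
lam2 = np.conj(lam1)      # 1 - 2i

# A v_bar = lambda_2 v_bar, as claimed.
print(np.allclose(A @ v_bar, lam2 * v_bar))   # -> True

# NumPy's eigendecomposition recovers the same conjugate pair
# (eigenvectors are only determined up to a nonzero scalar).
vals, vecs = np.linalg.eig(A)
print(vals)                                   # -> [1.+2.j 1.-2.j] (order may vary)
```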