How many eigenvalues does an $n\times n$ matrix have, and how does this relate to the algebraic multiplicity of the dominant eigenvalue?

eigenvalues-eigenvectors, linear-algebra, matrices

I'm having to reevaluate my understanding of eigenvalues and how many eigenvalues an $n\times n$ matrix possesses. Previously, I had thought that such a matrix $A$ possessed $d\leq n$ complex eigenvalues, and that this number $d$ was determined by the number of distinct roots of the matrix's characteristic polynomial. Furthermore, the sum of the complex eigenvalues' algebraic multiplicities equals $n$, since the characteristic polynomial of $A$ necessarily has degree $n$ and therefore has $n$ complex roots (where a root of multiplicity $m$ is counted $m$ times). However, I came across a problem involving dominant eigenvalues that prompted me to question this interpretation:

If $A$ has a dominant eigenvalue $\lambda_1$, prove that the eigenspace $E_{\lambda_1}$ is one-dimensional.

Solution: If $\lambda$ is dominant, then $|\lambda|>|\gamma|$ for all other eigenvalues $\gamma$. But this means that the algebraic multiplicity of $\lambda$ is $1$, since it appears only once in the list of eigenvalues counted with multiplicity, so its geometric multiplicity is $1$ and thus its eigenspace is one-dimensional.

So if I'm reading this explanation correctly, a list of all of the eigenvalues of $A$ should include $i$ instances of an eigenvalue with algebraic multiplicity $i$. In other words, every $n \times n$ matrix has exactly $n$ complex eigenvalues, and there is a distinction between the number of eigenvalues that a matrix possesses and the number of distinct eigenvalues that a matrix possesses. This subtle distinction seemed arbitrary until I considered the solution to this problem, which seems to require that all eigenvalues be treated as separate entities, even if they possess the same scalar values. For example, an eigenvalue $\lambda =2$ with algebraic multiplicity $2$ should actually be thought of as two eigenvalues $\lambda _1 = \lambda _2 = 2$. With this understanding, it is clear that neither $\lambda _1$ nor $\lambda _2$ can be a dominant eigenvalue, since it is not true that $|\lambda _1|>|\lambda _2|$, nor is it true that $|\lambda _2|>|\lambda _1|$. In fact, it is impossible for any eigenvalue with algebraic multiplicity greater than $1$ to be dominant. From here, I am comfortable with the fact that a dominant eigenvalue (if it exists) must also have geometric multiplicity $1$, since the geometric multiplicity of an eigenvalue is always less than or equal to the corresponding algebraic multiplicity.
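As a quick numerical sanity check of this counting convention (a NumPy sketch of my own, not part of the original problem), a matrix with characteristic polynomial $(x-2)^2(x-5)$ reports three eigenvalues counted with multiplicity, but only two distinct ones:

```python
import numpy as np

# A matrix with characteristic polynomial (x - 2)^2 (x - 5):
# eigenvalue 2 has algebraic multiplicity 2, eigenvalue 5 has multiplicity 1.
A = np.diag([2.0, 2.0, 5.0])

# np.linalg.eigvals always returns n values, repeated per algebraic multiplicity.
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)                           # three eigenvalues counted with multiplicity
print(np.unique(np.round(eigs, 8)))   # the two distinct eigenvalues
```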

Is this the correct way to interpret the preceding proof/eigenvalues in general? Hopefully I've articulated my thought process clearly, and thank you for taking the time to help!

Best Answer

I don't think the convention is well-established: in some contexts, "the eigenvalues" refers to the set of distinct values together with their algebraic multiplicities, while in other contexts it refers to the list of $n$ eigenvalues, with repetitions according to multiplicity. Typically one can discern which convention is being used from context; otherwise the author should take care to clarify what is meant.

In your case, I think you just have to read the definition of "dominant eigenvalue" carefully. Based on the problem's phrasing "dominant eigenvalue $\lambda_1$," I suspect the definition is written as

if $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$, then $\lambda_1$ is considered dominant if $|\lambda_1| > |\lambda_i|$ for all $i \ne 1$

or something like that, which is unambiguous compared to

$|\lambda| > |\gamma|$ for all other eigenvalues $\gamma$

which is very ambiguous for the reasons you raise.


Now that we know the context of your question is the power method, my guess above about what "dominant eigenvalue" means is incorrect.

Let $\lambda_1, \ldots, \lambda_m$ be the distinct eigenvalues of $A$ with algebraic multiplicities $n_1, \ldots, n_m$. If $|\lambda_1| > |\lambda_i|$ for all $i \ne 1$, then $\lambda_1$ is said to be the dominant eigenvalue. The power method will converge to something in the eigenspace corresponding to $\lambda_1$. To ensure that it does not converge to zero, the initial vector must have a nonzero component in that eigenspace.
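Since the context is the power method, a minimal sketch may help (my own illustrative code, using a made-up $2\times 2$ example whose eigenvalues are $5$ and $2$, so $5$ is dominant):

```python
import numpy as np

def power_method(A, x0, iters=200):
    """Power method sketch: repeatedly apply A and normalize.
    Converges toward the dominant eigenspace when a dominant eigenvalue
    exists and x0 has a nonzero component in that eigenspace."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)
    # Rayleigh quotient estimates the dominant eigenvalue.
    lam = x @ (A @ x)
    return lam, x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])      # eigenvalues 5 and 2; 5 is dominant
lam, v = power_method(A, np.array([1.0, 0.0]))
print(lam)   # approximately 5
```

Note that the starting vector $(1, 0)$ has a nonzero component along the dominant eigenvector $(1, 1)$, which is what guarantees convergence to $\lambda_1 = 5$.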