[Math] Geometric and analytic multiplicity of a linear operator

linear algebra

If I understand correctly, the analytic (algebraic) multiplicity of an eigenvalue $\lambda$ of a linear operator, say $T:V\to V$, is the number of times $\lambda$ shows up as a root of the characteristic polynomial. (Assuming you have a matrix $A$ representing $T$ with respect to some basis of $V$, the characteristic polynomial is $\det(A-\lambda I)$.)

I understand how to find this, but what exactly does this mean for the linear operator?

Also, how do we find the geometric multiplicity of an eigenvalue of $T$? What is its significance? I tried looking it up on Wikipedia, but the definitions there are quite formal and use notation and terminology I am not familiar with. Am I correct in thinking that it is the largest number of linearly independent eigenvectors associated to an eigenvalue of $T$? Do all the eigenvalues of a linear operator have the same number of linearly independent eigenvectors?

Edit: Here is a specific example.
We have the $n\times n$ matrix, say $A$, with $1$s along the diagonal and everywhere above it, and $0$s below. Since this matrix is upper triangular, the only eigenvalue of $A$ is $1$. Writing out the equations an eigenvector for the eigenvalue $1$ must satisfy (with $\lambda = 1$), we get:
\begin{align*}
x_1 + x_2 + \cdots + x_n &= \lambda x_1\\
x_2 + \cdots + x_n &= \lambda x_2\\
&\vdots\\
x_n &=\lambda x_n
\end{align*}

It seems like we need to do some inspection here; it would be hard (or at least I can't see how) to show this formally. It seems to me like there are two possible linearly independent eigenvectors here,
$$\vec 0 \quad\text{and}\quad (1,0,\ldots,0).$$
So the geometric multiplicity is $1$?

Best Answer

The geometric multiplicity of $\lambda$ tells you how big a subspace of $V$ you can find where $T$ acts simply as "multiplication by $\lambda$" (that is, how big, dimensionally speaking, the subspace spanned by the eigenvectors of $\lambda$ is).

The analytic/algebraic multiplicity of $\lambda$ tells you how big that space "should" be for $V$ and $T$ to have a "nice" decomposition of the following kind: you can express $V$ as a direct sum of subspaces $E_{\lambda_1}$, $E_{\lambda_2},\ldots,E_{\lambda_k}$ (where $\lambda_1,\ldots,\lambda_k$ are the distinct roots of the characteristic polynomial) so that on each $E_{\lambda_i}$, $T$ acts just by "multiplication by $\lambda_i$". For $V$ to really be equal to the sum of these spaces, you need $\dim(E_{\lambda_i})$, which is the geometric multiplicity of $\lambda_i$, to equal the algebraic multiplicity of $\lambda_i$.
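This "nice decomposition" is exactly diagonalizability, and a computer algebra system can check it directly. Here is a small sketch using sympy (not part of the original answer): a diagonal matrix decomposes nicely, while a matrix with a deficient eigenvalue does not.

```python
import sympy as sp

# Diagonalizable: every eigenvalue's geometric multiplicity
# equals its algebraic multiplicity, so V splits as a direct
# sum of eigenspaces.
D = sp.Matrix([[2, 0],
               [0, 3]])

# Not diagonalizable: lambda = 2 has algebraic multiplicity 2
# but only a 1-dimensional eigenspace, so the eigenspaces do
# not fill up all of V.
J = sp.Matrix([[2, 1],
               [0, 2]])

print(D.is_diagonalizable())  # True
print(J.is_diagonalizable())  # False
```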

It has other properties, but I think that's a good place to start.

Yes, the geometric multiplicity is the largest possible number of linearly independent eigenvectors of $T$ associated to $\lambda$ (vectors $\mathbf{v}$, $\mathbf{v}\neq\mathbf{0}$, such that $T(\mathbf{v}) = \lambda\mathbf{v}$; that is, vectors on which $T$ acts just by "multiplication by $\lambda$").

No, not all eigenvalues have the same geometric multiplicity; for example, in the matrix $$\left(\begin{array}{cccc} 2 & 1 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 1 \end{array}\right),$$ the characteristic polynomial is $(2-\lambda)^3(1-\lambda)$, so the two eigenvalues are $\lambda_1=1$ and $\lambda_2=2$. The eigenvalue $\lambda_1=1$ has algebraic and geometric multiplicities both equal to $1$; $\lambda_2=2$ has algebraic multiplicity $3$ and geometric multiplicity $2$. (You can check the geometric multiplicity by finding the nullity of $A-2I$.)
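These multiplicities can be verified numerically; here is one way to do it with sympy (my own check, not from the original answer), using the fact that the geometric multiplicity of $\lambda$ is $n - \operatorname{rank}(A - \lambda I)$:

```python
import sympy as sp

# The 4x4 example matrix from above.
A = sp.Matrix([
    [2, 1, 0, 0],
    [0, 2, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 1],
])

lam = sp.symbols('lambda')
char_poly = (A - lam * sp.eye(4)).det()
# char_poly equals (lambda - 1)*(lambda - 2)**3, i.e. (2-lambda)^3 (1-lambda).
print(sp.factor(char_poly))

# Geometric multiplicity = nullity = n - rank(A - lambda*I).
print(4 - (A - 2 * sp.eye(4)).rank())  # 2
print(4 - (A - 1 * sp.eye(4)).rank())  # 1
```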

Added. Since the geometric multiplicity of an eigenvalue $\lambda_i$ is the dimension of the subspace $E_{\lambda_i}$, your first task in finding that dimension is to identify the vectors $\mathbf{v}$ for which $T(\mathbf{v})=\lambda_i\mathbf{v}$. This is equivalent to finding the vectors for which $(T-\lambda_i I)(\mathbf{v})=\mathbf{0}$. The reason this is a better problem to tackle is that it is easier to solve a system that looks like $B\mathbf{v}=\mathbf{0}$, than one that looks like $A\mathbf{v}=\lambda\mathbf{v}$.

So, you find the nullspace of $T-\lambda_iI$, that is, the collection of all vectors $\mathbf{v}$ for which $(T-\lambda_iI)(\mathbf{v})=\mathbf{0}$. Its dimension is precisely the geometric multiplicity of $\lambda_i$, so the geometric multiplicity of $\lambda_i$ is found by computing $\mathrm{nullity}(T-\lambda_iI)$.
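The recipe above (geometric multiplicity $= \mathrm{nullity}(T-\lambda_i I)$) can be sketched in a few lines of sympy; the helper name here is my own, chosen for illustration:

```python
import sympy as sp

def geometric_multiplicity(A, lam):
    # Dimension of the nullspace of A - lam*I, i.e. nullity(A - lam*I).
    n = A.shape[0]
    return len((A - lam * sp.eye(n)).nullspace())

# diag(5, 5, 7): lambda = 5 has two independent eigenvectors.
print(geometric_multiplicity(sp.diag(5, 5, 7), 5))  # 2

# A Jordan block: lambda = 2 repeats on the diagonal, but the
# eigenspace is only 1-dimensional.
print(geometric_multiplicity(sp.Matrix([[2, 1], [0, 2]]), 2))  # 1
```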

For your specific example: we begin with the matrix corresponding to the standard basis: $$A =\left(\begin{array}{cccc} 1 & 1 & \cdots & 1\\ 0 & 1 & \cdots & 1\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{array}\right).$$ The characteristic polynomial is $\det(A-tI) = (1-t)^n$, so the only eigenvalue is $\lambda=1$, with algebraic multiplicity $n$.

To find the geometric multiplicity, take $A-1I$ ("$1I$" because we are taking $\lambda = 1$), and find its nullspace. Since $$A - I = \left(\begin{array}{ccccc} 0 & 1 & 1 &\cdots & 1\\ 0 & 0 & 1 & \cdots & 1\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{array}\right),$$ finding the reduced row-echelon form will give you the solutions to $(A-I)\mathbf{x}=\mathbf{0}$. The reduced row-echelon form of $A-I$ is $$\left(\begin{array}{ccccc} 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ 0 & 0 & 0 & \cdots & 0 \end{array}\right),$$ so $\mathbf{x}=(x_1,x_2,\ldots,x_n)$ is in the nullspace if and only if $x_2=x_3=\cdots=x_n=0$.

So the eigenvectors of $\lambda=1$ are all vectors of the form $(a,0,0,\ldots,0)$ for arbitrary $a$; however, because $\mathbf{0}$ is always a solution to $A\mathbf{x}=\lambda \mathbf{x}$ for any $\lambda$, we declare by fiat that an eigenvector has to be nonzero (this has no bearing on the geometric multiplicity of $\lambda$, because $\mathbf{0}$ can never be in a linearly independent set). A basis for this nullspace is given by $(1,0,\ldots,0)$, so the nullspace has dimension $1$. This dimension is the geometric multiplicity of $\lambda=1$.
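The whole computation above can be replayed in sympy for a concrete size, say $n = 5$ (the pattern is the same for any $n$; this check is mine, not part of the original answer):

```python
import sympy as sp

n = 5  # illustrative size
# 1s on and above the diagonal, 0s below.
A = sp.Matrix(n, n, lambda i, j: 1 if j >= i else 0)

t = sp.symbols('t')
# Characteristic polynomial: equals (1 - t)**n, so the only
# eigenvalue is 1, with algebraic multiplicity n.
print(sp.factor((A - t * sp.eye(n)).det()))

B = A - sp.eye(n)
print(B.rref()[0])          # the reduced row-echelon form shown above
print(len(B.nullspace()))   # 1: the geometric multiplicity
print(B.nullspace()[0].T)   # basis vector (1, 0, ..., 0)
```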

So, in summary: $\lambda=1$ is the only eigenvalue; it has algebraic multiplicity $n$, and geometric multiplicity $1$. The eigenvectors are all nonzero multiples of $(1,0,0,\ldots,0)$.
