Imagine that I'm writing the Jordan form of a matrix and I know that the eigenvalue needs to appear 4 times on the diagonal (algebraic multiplicity is 4) and we need 2 Jordan blocks (geometric multiplicity is 2). Now how do I know the size of the blocks? It could be a 1×1 and a 3×3 block, or two 2×2 blocks, right?
[Math] Size of Jordan block
jordan-normal-form, linear-algebra
Related Solutions
This is how I think about it: suppose I have a matrix consisting of just one Jordan block of size $k$ and eigenvalue $\lambda$. Then what is the minimal polynomial? An easy computation shows that it is $(x-\lambda)^k$. Now suppose I have a matrix which consists of any number of Jordan blocks all with eigenvalue $\lambda$ and the max size of any of the Jordan blocks is $k$. Then what is the minimal polynomial? Again it is $(x-\lambda)^k$ (since the matrix is block diagonal).
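This can be checked numerically (a sketch using NumPy; the `jordan_block` helper is an ad hoc construction, not from the answer above): a single Jordan block of size $k$ is annihilated by $(x-\lambda)^k$ but not by $(x-\lambda)^{k-1}$, so its minimal polynomial is exactly $(x-\lambda)^k$.

```python
import numpy as np

# Illustrative helper: a single Jordan block of size k with eigenvalue
# lam on the diagonal and 1s on the superdiagonal.
def jordan_block(lam, k):
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

lam, k = 2.0, 4
J = jordan_block(lam, k)
N = J - lam * np.eye(k)  # the nilpotent part of the block

# (x - lam)^k annihilates J, but (x - lam)^(k-1) does not:
print(np.allclose(np.linalg.matrix_power(N, k), 0))      # True
print(np.allclose(np.linalg.matrix_power(N, k - 1), 0))  # False
```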
If I have an $n$ by $n$ matrix which consists of any number of Jordan blocks all with eigenvalue $\lambda$, then the characteristic polynomial is of course $(x-\lambda)^n$.
Note that it is not true that the minimal polynomial and characteristic polynomial completely determine the Jordan block structure. Suppose I have a 4 by 4 matrix $A$ with 2 Jordan blocks of size 2 and eigenvalue $\lambda$, and a 4 by 4 matrix $B$ with 3 Jordan blocks, one of size 2, and two of size 1 with eigenvalue $\lambda$. Then for both $A$ and $B$ the minimal polynomial will be $(x-\lambda)^2$ and characteristic polynomial will be $(x-\lambda)^4$.
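A quick numerical illustration (a NumPy sketch; the matrices below are the $A$ and $B$ just described, with $\lambda = 3$ chosen arbitrarily) shows that both matrices satisfy $(A-\lambda I)^2 = 0$ with $A - \lambda I \neq 0$, yet the number of blocks, which equals $\dim\ker(A-\lambda I)$, differs:

```python
import numpy as np

lam = 3.0  # arbitrary eigenvalue chosen for illustration

# A: two Jordan blocks of size 2; B: one block of size 2, two of size 1.
A = np.array([[lam, 1,   0,   0],
              [0,   lam, 0,   0],
              [0,   0,   lam, 1],
              [0,   0,   0,   lam]])
B = np.array([[lam, 1,   0,   0],
              [0,   lam, 0,   0],
              [0,   0,   lam, 0],
              [0,   0,   0,   lam]])

for M in (A, B):
    N = M - lam * np.eye(4)
    # Same minimal-polynomial exponent for both: N^2 = 0 but N != 0.
    print(np.allclose(N @ N, 0), not np.allclose(N, 0))
    # Yet the number of blocks (= dim ker N = 4 - rank N) differs:
    print(4 - np.linalg.matrix_rank(N))  # 2 for A, 3 for B
```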
So, if we know that the multiplicity of $\lambda$ in the characteristic polynomial is 5 and in the minimal polynomial it is 3, then all we know is that the largest Jordan block for $\lambda$ is size 3, and the sum of the sizes of all Jordan blocks is 5. One quickly sees that the only options are:
- One Jordan block of size 3 and one of size 2. (5 = 3+2)
- One Jordan block of size 3 and two of size 1. (5 = 3+1+1)
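The two options above can be told apart numerically by counting blocks via $\dim\ker(A-\lambda I)$ (a NumPy sketch; the `jordan` helper and the choice $\lambda = 0$ are just for illustration):

```python
import numpy as np

# Illustrative helper: block-diagonal matrix of Jordan blocks with a
# single eigenvalue lam and the given block sizes.
def jordan(lam, sizes):
    n = sum(sizes)
    M = lam * np.eye(n)
    start = 0
    for s in sizes:
        for j in range(start, start + s - 1):
            M[j, j + 1] = 1.0
        start += s
    return M

lam = 0.0  # arbitrary; only the block structure matters here
A = jordan(lam, [3, 2])     # 5 = 3 + 2
B = jordan(lam, [3, 1, 1])  # 5 = 3 + 1 + 1

# dim ker(M - lam*I) = number of Jordan blocks, which distinguishes them:
for M in (A, B):
    print(5 - np.linalg.matrix_rank(M - lam * np.eye(5)))  # 2, then 3
```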
Let $S_1 = \{\lambda_1,\dots,\lambda_k\},S_2 = \{\lambda_{k+1},\dots,\lambda_m\}$ be disjoint sets comprising the (distinct!) eigenvalues of $A$ such that $\lambda \in S_1 \implies \bar \lambda \in S_1$. Let $p(x) = (x-\lambda_1)\dots(x-\lambda_k)$ and $q(x) = (x-\lambda_{k+1})\dots(x-\lambda_m)$. Because of the way that $S_1,S_2$ were defined, both $p,q$ are polynomials with real coefficients.
As a consequence of the Cayley-Hamilton theorem, it holds that $[p(A)]^n[q(A)]^n = 0$, since the characteristic polynomial of $A$ divides $[p(x)q(x)]^n$. Note that $U_1 = \ker([p(A)]^n)$ and $U_2 = \ker([q(A)]^n)$ are invariant subspaces of $\Bbb R^n$.
Claim: $\Bbb R^n = U_1 \oplus U_2$.
Proof of claim: Note that because $p^n$ and $q^n$ are relatively prime, there exist polynomials $f,g$ such that $f(x)[p(x)]^n + g(x)[q(x)]^n = 1$, from which it follows that $f(A)[p(A)]^n + g(A)[q(A)]^n = I$.
To see that $U_1,U_2$ are disjoint (i.e. have intersection $\{0\}$), note that if $v \in U_1 \cap U_2$, it follows that $$ v = Iv = \big[f(A)[p(A)]^n + g(A)[q(A)]^n\big]v = f(A)\big[[p(A)]^n v\big] + g(A)\big[[q(A)]^n v\big] = 0. $$ To see that $U_1 + U_2 = \Bbb R^n$, note that any $v$ can be decomposed into $$ v = [q(A)]^n g(A)v + [p(A)]^n f(A)v. $$ Because $[p(A)]^n[q(A)]^n = [q(A)]^n[p(A)]^n = 0$, it is easy to see that $[q(A)]^n g(A)v \in U_1$ and $[p(A)]^n f(A)v \in U_2$. $\square$
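The Bezout identity used in the proof can be computed explicitly with SymPy's extended Euclidean algorithm for polynomials (`gcdex`). The factors below are a made-up example with the conjugate pair $\pm i$ in one set and the real root $2$ in the other:

```python
import sympy as sp

x = sp.symbols('x')
# Hypothetical split of the spectrum: p carries the conjugate pair +-i,
# q the real root 2; both polynomials have real coefficients.
p = sp.expand((x - sp.I) * (x + sp.I))  # x**2 + 1
q = x - 2

# Extended Euclid for polynomials: f*p + g*q == gcd(p, q), which is 1
# here since p and q share no roots.
f, g, d = sp.gcdex(p, q, x)
print(f, g, d)
```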
Now, if $v_1,\dots,v_d$ is a basis of $U_1$ and $v_{d+1},\dots,v_n$ is a basis of $U_2$, it follows that the matrix of $A$ relative to the basis $\{v_1,\dots,v_n\}$ has the desired block-diagonal form.
Best Answer
Yes, the two options you give are the only ones. To decide between the two you can consider $(A-\lambda I)^2$ where $A$ is the matrix and $\lambda$ the eigenvalue.
If the dimension of the kernel of $(A-\lambda I)^2$ is $3$, then you are in the $(1,3)$ situation; if it is $4$, you are in the $(2,2)$ situation.
More generally, the increase in dimension between the kernel of $(A-\lambda I)^{r-1}$ and $(A-\lambda I)^{r}$ is the number of blocks of size $r$ or more.
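This counting rule can be checked directly (a NumPy sketch; the matrix is an assumed example with Jordan blocks of sizes $3$ and $1$ for a single eigenvalue):

```python
import numpy as np

lam = 1.0
# Assumed example: one Jordan block of size 3 and one of size 1.
A = np.array([[lam, 1,   0,   0],
              [0,   lam, 1,   0],
              [0,   0,   lam, 0],
              [0,   0,   0,   lam]])
N = A - lam * np.eye(4)

def nullity(M):
    return M.shape[0] - np.linalg.matrix_rank(M)

prev = 0
for r in range(1, 5):
    cur = nullity(np.linalg.matrix_power(N, r))
    # increment = number of blocks of size >= r: here 2, 1, 1, 0
    print(r, cur - prev)
    prev = cur
```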