[Math] Computing the Frobenius normal form

Tags: abstract-algebra, linear-algebra, matrices

I was wondering whether someone could give me an example of how one actually determines the Frobenius normal form of a given matrix. It also seems hard to find an example where a new basis is calculated so that a given matrix is put into Frobenius normal form. I really tried to find an example where this is done, but most books treat this form as a theoretical tool rather than actually computing it.

A short summary of how you would proceed to calculate the Frobenius normal form and the corresponding basis would be more than enough, too.

Best Answer

If this is often treated as a mostly theoretical construction, that is probably because there is no really easy procedure for finding the rational canonical form, even though its existence is of fundamental importance.

However, here is a basic algorithmic way to compute it. Write down the matrix $XI-A$, a matrix with polynomial entries whose determinant is the characteristic polynomial. Now apply the Smith normal form algorithm over the principal ideal domain $K[X]$ of polynomials. This produces a diagonal matrix whose main diagonal contains monic polynomials $P_i$, each of which divides the next; the last of them is the minimal polynomial, and their product is the characteristic polynomial. The list will usually start with many occurrences of the constant polynomial $1$, which can be ignored; for every remaining polynomial write down its companion matrix, and string these together as diagonal blocks to get the block diagonal matrix which is the Frobenius normal form of $A$.
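As a small computational sketch of that route (not part of the original answer), the following Python/SymPy snippet computes the invariant factors of $XI-A$ via the determinantal divisors (the gcd of all $k\times k$ minors), which give the same diagonal polynomials as the Smith normal form, and then assembles the companion blocks. The matrix `A` and the helper names are illustrative choices, not taken from the question.

```python
from itertools import combinations
import sympy as sp

x = sp.symbols('x')

def invariant_factors(A):
    """Nontrivial invariant factors of A, i.e. the non-constant diagonal
    entries of the Smith normal form of x*I - A over Q[x]."""
    n = A.rows
    M = x * sp.eye(n) - A
    d_prev = sp.Poly(1, x)
    factors = []
    for k in range(1, n + 1):
        # k-th determinantal divisor: gcd of all k x k minors of x*I - A
        g = sp.Integer(0)
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                g = sp.gcd(g, M.extract(list(rows), list(cols)).det())
        d_k = sp.Poly(g, x).monic()
        factors.append(sp.quo(d_k, d_prev))   # invariant factor P_k = d_k / d_{k-1}
        d_prev = d_k
    return [p for p in factors if p.degree() > 0]   # drop the constant 1's

def companion(p):
    """Companion matrix of a monic polynomial p (a sympy Poly in x)."""
    coeffs = p.all_coeffs()[::-1]      # [a_0, a_1, ..., a_{n-1}, 1]
    n = p.degree()
    C = sp.zeros(n, n)
    for i in range(1, n):
        C[i, i - 1] = 1                # ones on the subdiagonal
    for i in range(n):
        C[i, n - 1] = -coeffs[i]       # last column: -a_0, ..., -a_{n-1}
    return C

A = sp.Matrix([[1, 0, 0],
               [1, 2, 0],
               [0, 0, 2]])             # an arbitrary illustrative matrix
F = sp.diag(*[companion(p) for p in invariant_factors(A)])
print(F)                               # the Frobenius normal form of A
```

For this sample matrix the nontrivial invariant factors come out as $X-2$ and $(X-1)(X-2)=X^2-3X+2$, so the Frobenius normal form consists of the $1\times 1$ block $(2)$ followed by the companion matrix of $X^2-3X+2$.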

A more "linear algebraic" way to find the Frobenius normal form is to first find the minimal polynomial of $A$. If it has degree $n$, then it is also the characteristic polynomial and the Frobenius normal form is just its companion matrix. If not, then the companion matrix of the minimal polynomial is just the final block of the Frobenius normal form; to find the other blocks, find a vector $v$ not annihilated by any polynomial in $A$ of degree less than that of the minimal polynomial (such a vector is certain to exist; in fact most vectors have this property), take the ($A$-stable) subspace $W$ of $V$ spanned by $v$ and its repeated images under $A$, and continue recursively with the linear operator that $A$ induces on the quotient space $V/W$ (there also exists an $A$-stable complementary subspace to $W$, but I think finding one is unnecessary, and working with the quotient is easier). A sketch of the first step is given below.
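The following sketch, again illustrative rather than taken from the answer, shows that first step in SymPy: it computes the minimal polynomial of $A$ as the first linear dependence among $I, A, A^2, \dots$, and checks whether a candidate vector $v$ has an annihilator of the same degree, so that $v, Av, A^2v, \dots$ span a suitable subspace $W$. The matrix `A`, the vector `v`, and the helper names are assumptions made for the example.

```python
import sympy as sp

x = sp.symbols('x')

def minimal_polynomial(A):
    """Smallest monic p with p(A) = 0, found as the first linear dependence
    among I, A, A^2, ... (each power flattened to a column vector)."""
    n = A.rows
    powers = [sp.eye(n)]
    for k in range(1, n + 1):
        powers.append(powers[-1] * A)
        M = sp.Matrix.hstack(*[P.reshape(n * n, 1) for P in powers])
        ns = M.nullspace()
        if ns:
            coeffs = ns[0]             # c_0*I + c_1*A + ... + c_k*A^k = 0
            return sp.Poly(sum(coeffs[i] * x**i for i in range(k + 1)), x).monic()

def annihilator_degree(A, v):
    """Degree of the smallest monic p with p(A)v = 0: the first k for which
    v, A v, ..., A^k v become linearly dependent."""
    vecs = [v]
    for k in range(1, A.rows + 1):
        vecs.append(A * vecs[-1])
        if sp.Matrix.hstack(*vecs).rank() < len(vecs):
            return k

A = sp.Matrix([[1, 0, 0],
               [1, 2, 0],
               [0, 0, 2]])             # same illustrative matrix as above
mu = minimal_polynomial(A)             # here x**2 - 3*x + 2, of degree 2 < 3
v = sp.Matrix([1, 0, 1])
print(mu, annihilator_degree(A, v))    # v is a valid choice iff the degrees match
```

Here the minimal polynomial has degree 2 and the annihilator of $v$ also has degree 2, so $W=\operatorname{span}(v,Av)$ contributes the companion block of $X^2-3X+2$, and one continues with the operator induced on the one-dimensional quotient $V/W$.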
