Symmetric Real Matrices – Smooth Diagonalization

ca.classical-analysis-and-odes, eigenvector, matrices, sp.spectral-theory, vector-bundles

This question is related to another question, but it is definitely not the same.

Is it always possible to diagonalize (at least locally around each point) a family of symmetric real matrices $A(t)$ which depend smoothly on a real parameter $t$? The diagonalization is supposed to be done by similarity transformations with a family of invertible matrices $S(t)$ depending smoothly on $t$.
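To be explicit, I am asking for a smooth family of invertible matrices $S(t)$, defined near each point, such that
$$S(t)^{-1}A(t)\,S(t)=\operatorname{diag}\bigl(\lambda_1(t),\dots,\lambda_n(t)\bigr).$$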

An equivalent formulation is whether, given a smooth vector bundle $E\to \mathbb R$ over the one-dimensional differentiable manifold $\mathbb R$, and a smooth symmetric bilinear form $b$ on the bundle (a smooth section of $E^*\otimes E^*$), we can always find a local frame of smooth sections of $E$ in which the symmetric bilinear form $b$ is diagonal.
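In this bundle formulation, a change of local frame by a smooth invertible matrix $S(t)$ transforms the Gram matrix $A(t)$ of $b$ by congruence,
$$A(t)\;\longmapsto\;S(t)^{\top}A(t)\,S(t),$$
so here, too, the only requirement on $S(t)$ is invertibility, not orthogonality.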

Some results about such diagonalizations are known, for example from this article. There it is proven that the eigenvalues of a family of Hermitian matrices depending smoothly on a real parameter can be chosen smoothly, provided there is no point where roots of the characteristic polynomial meet to infinite order. In particular, this works in the analytic case. But because the eigenvectors are computed with respect to the natural basis, they are implicitly assumed to be orthogonal with respect to this basis.
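As a toy illustration of what choosing the eigenvalues smoothly means (this example is not from the article, just a standard one): for
$$A(t)=\begin{pmatrix} t & 0\\ 0 & -t\end{pmatrix},$$
the eigenvalues ordered by size, $|t|$ and $-|t|$, are not smooth at $t=0$, but they can be re-chosen as the smooth functions $t$ and $-t$.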

What I need is a smooth diagonalization by similarity transformations, which therefore are not required to be orthogonal with respect to the natural basis, only invertible. The restriction is thus weaker than that of having smooth eigenvectors.

Is it always possible to find such a diagonalization? If not, what are the conditions under which it can be done? Can you provide some references?

Thank you.

Update with the conclusions:

The counterexample of the type given by Michael Renardy and Denis Serre (who also explains it) answers my question (negatively). Initially I thought that such examples were valid only for the eigenvector problem, being of the type in the article to which I referred in my question, and I hoped that allowing transformations more general than orthogonal ones might avoid the problem.

But I understand now that the two problems are in fact equivalent. I think the essence of the counterexample is to have a basis made of two rotating vectors on which the quadratic form corresponding to $b(t)$ takes opposite signs. In this case, if we assume, for contradiction, that we can diagonalize the matrices, even with $S(t)$ not orthogonal, then in the new basis we are essentially solving the eigenvalue problem, and that possibility is ruled out by the counterexamples. For the positive definite case, Johannes Ebert pointed out that the diagonalization is possible.
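One way to spell out the contradiction (just a sketch, for a rotating indefinite example such as the one cited in the answer below): suppose a smooth invertible family $S(t)$ diagonalizes the form,
$$S(t)^{\top}A(t)\,S(t)=\operatorname{diag}\bigl(d_1(t),d_2(t)\bigr).$$
Since $\det A(t)<0$ for $t\neq 0$, we get $d_1(t)d_2(t)<0$ there; say $d_1(t)>0$ for small $t>0$. The first column $v_1(t)$ of $S(t)$ satisfies $b_t(v_1(t),v_1(t))=d_1(t)>0$, so it lies in the positive cone of $b_t$, a double wedge which rotates through every direction infinitely often as $t\to 0^+$. But $v_1(t)\to v_1(0)\neq 0$, so along a suitable sequence $t_k\to 0^+$ the vector $v_1(t_k)$ falls into the negative cone of $b_{t_k}$, a contradiction.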

Thank you all for your help.

Best Answer

A counterexample is given in Section II.5.3, p. 111 of T. Kato, Perturbation Theory for Linear Operators, 2nd ed.
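For completeness, the example in that section (Rellich's example, if I remember it correctly) is, up to notation,
$$A(t)=e^{-1/t^{2}}\begin{pmatrix}\cos(2/t) & \sin(2/t)\\ \sin(2/t) & -\cos(2/t)\end{pmatrix}\quad(t\neq 0),\qquad A(0)=0.$$
The family is smooth and its eigenvalues $\pm e^{-1/t^{2}}$ are smooth, but the eigenvectors rotate without limit as $t\to 0$, so no continuous choice of eigenvectors exists near $t=0$.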
