Using the singular value decomposition for calculating eigenvalues and eigenvectors of symmetric matrices

Tags: eigenvalues-eigenvectors, linear-algebra, svd

I know that for a symmetric matrix, the singular values are equal to the absolute values of the eigenvalues. If the matrix is also positive semi-definite, the eigendecomposition and the singular value decomposition are identical. My question concerns symmetric matrices that are not positive semi-definite, i.e. that have at least one negative eigenvalue.

I have experimented with randomly generated symmetric matrices and found that for a positive eigenvalue, the eigenvector, after choosing its sign appropriately, is identical to both the left and right singular vectors. For a negative eigenvalue, the eigenvector is equal to either the left or the right singular vector, and equal to the remaining singular vector multiplied by -1. I have not been able to come up with a counterexample.

If this observation holds true for all symmetric matrices, an eigenvalue decomposition can easily be derived from a singular value decomposition for such matrices, by flipping the sign of the singular value and of one of the singular vectors whenever the left and right singular vectors differ in sign.
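To make the proposal concrete, here is a minimal NumPy sketch of the idea (my own illustration, not the C program mentioned below; the name `eig_sym_from_svd` is made up, and the sketch assumes distinct singular values, which turns out to matter, see the update):

```python
import numpy as np

def eig_sym_from_svd(A):
    """Recover eigenpairs of a symmetric matrix A from its SVD.

    Assumes the singular values of A are distinct; with repeated singular
    values the returned singular vectors need not be eigenvectors.
    """
    U, s, Vt = np.linalg.svd(A)
    # If u_i is (numerically) +v_i, the eigenvalue is +sigma_i;
    # if u_i is -v_i, the eigenvalue is -sigma_i.
    signs = np.sign(np.sum(U * Vt.T, axis=0))   # dot products u_i . v_i
    return signs * s, U                         # eigenvalues, eigenvectors (columns of U)

# Quick check against a standard symmetric eigensolver.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T                                     # random symmetric test matrix
w, V = eig_sym_from_svd(A)
print(np.allclose(np.sort(w), np.linalg.eigvalsh(A)))  # expected: True
print(np.allclose(A @ V, V * w))                # A v_i = lambda_i v_i, expected: True
```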

I initially assumed that this was a well-known fact. However, I have not found it stated clearly in the relevant Wikipedia articles or on other websites. Moreover, I haven't seen the SVD listed as an eigenvalue algorithm anywhere, while other algorithms that are limited to symmetric matrices are.

I give an example below, generated by a program that I have written, which uses the cyclic Jacobi method from the GNU Scientific Library for calculating eigenvalues and eigenvectors, and a function from mymathlib.com for calculating the SVD. Eigenvalues and singular values were sorted by descending absolute value. Signs were chosen such that the first component of each eigenvector and each left singular vector was positive.

My questions are: Is my suggested algorithm for calculating eigenvalues and eigenvectors valid for all symmetric matrices?
If so, is there any reason to prefer other methods, such as the cyclic Jacobi method, over the SVD for such calculations?

Matrix
------
69 47 -1 512
47 1 32 43
-1 32 27 40
512 43 40 88

Eigenvalues
-----------
599.067
-435.442
43.6227
-22.2481

Eigenvectors
------------
0.694513 0.711505 0.105479 0.0169263
0.10848 -0.0122921 -0.492832 -0.863248
0.0544405 0.0629158 -0.862857 0.498554
0.709169 -0.699751 0.0383271 0.0772006

Singular values
---------------
599.067
435.442
43.6227
22.2481

Left singular vectors
---------------------
0.694513 0.711505 0.105479 0.0169263
0.10848 -0.0122921 -0.492832 -0.863248
0.0544405 0.0629158 -0.862857 0.498554
0.709169 -0.699751 0.0383271 0.0772006

Right singular vectors
----------------------
0.694513 -0.711505 0.105479 -0.0169263
0.10848 0.0122921 -0.492832 0.863248
0.0544405 -0.0629158 -0.862857 -0.498554
0.709169 0.699751 0.0383271 -0.0772006
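For reference, the sign pattern in these tables can be re-checked with a few lines of NumPy (this only verifies the numbers above; it is not the program that produced them):

```python
import numpy as np

# The 4x4 symmetric example matrix from the tables above.
A = np.array([[ 69.0, 47.0, -1.0, 512.0],
              [ 47.0,  1.0, 32.0,  43.0],
              [ -1.0, 32.0, 27.0,  40.0],
              [512.0, 43.0, 40.0,  88.0]])

U, s, Vt = np.linalg.svd(A)            # singular values in descending order
print(s)                               # ~ [599.067, 435.442, 43.6227, 22.2481]
print(np.sum(U * Vt.T, axis=0))        # dot products u_i . v_i: ~ [1, -1, 1, -1]
# The -1 entries mark the singular values whose corresponding eigenvalues are negative.
```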

Update

Qiaochu Yuan answered that for a symmetric matrix with distinct singular values, my observation is correct, i.e. the eigenvalues and eigenvectors can be deduced from the singular value decomposition. However, for a symmetric matrix with repeated singular values, an SVD algorithm is not guaranteed to produce a correct solution, because a repeated singular value can correspond to distinct eigenvalues (equal in magnitude but opposite in sign).

It took me some time to fully comprehend the answer (first needing to learn that eigenvectors, and singular vectors, are not unique for matrices with repeated eigenvalues). I have since done numerical experiments and found that counterexamples to my observation can easily be generated as $QDQ^T$, where $Q$ is an orthogonal matrix obtained from the QR decomposition of a random matrix, and $D$ is a suitable diagonal matrix with a pair of entries that differ only in sign.
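A sketch of such a counterexample in NumPy (my own reconstruction; the eigenvalues 3, -3, 2, 1 are an arbitrary choice, and whether the library SVD happens to return eigenvectors for the repeated singular value depends on the routine, but generically it does not):

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthogonal Q from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
# Eigenvalues 3, -3, 2, 1 -- the pair +/-3 collapses to a repeated singular value 3.
D = np.diag([3.0, -3.0, 2.0, 1.0])
A = Q @ D @ Q.T                       # symmetric; its eigenvectors are the columns of Q

U, s, Vt = np.linalg.svd(A)
print(s)                              # ~ [3, 3, 2, 1]: the singular value 3 is repeated

# The sign rule from the question breaks down in the repeated block:
# u_1 . v_1 is generally strictly between -1 and 1 rather than +/-1 ...
print(np.dot(U[:, 0], Vt[0, :]))
# ... and U[:, 0] is generally not an eigenvector of A:
u = U[:, 0]
print(np.linalg.norm(A @ u - (u @ A @ u) * u))   # eigenvector residual, typically far from 0
```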

Best Answer

When all the singular values are distinct (which is true for random matrices with high probability) this is true and easy to prove. Write $A = UDV^T$. Then $A^T = VDU^T = A = UDV^T$. In the case of distinct singular values the singular vectors are unique up to scale; it follows that if a matrix $A$ is symmetric then its left and right singular vectors are $\pm 1$ times each other. In the $+1$ case they are eigenvectors with eigenvalue the singular value, and in the $-1$ case they are eigenvectors with eigenvalue the negative of the singular value.
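Spelling out that last step in symbols (my paraphrase of the answer): since $A = UDV^T$ gives $Av_i = \sigma_i u_i$, if $u_i = \pm v_i$ then $Av_i = \sigma_i u_i = \pm\sigma_i v_i$, so $v_i$ is an eigenvector of $A$ with eigenvalue $\pm\sigma_i$.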

When the singular values aren't distinct there's more ambiguity in how to choose an SVD. What's true in this case is that it's always possible to find an SVD where the singular vectors are eigenvectors, but there's no reason an SVD algorithm is guaranteed to spit out such an SVD. The issue is that the singular values can have higher multiplicities than the eigenvalues, e.g. the eigenvalues could be $\pm 1$ and then the singular values are all $1$.
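A concrete instance of that last example, in the same NumPy sketch style as above (the matrices here are my choice: the $2\times 2$ exchange matrix has eigenvalues $\pm 1$ and both singular values equal to $1$):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # eigenvalues +1 and -1, both singular values equal 1

# One perfectly valid SVD: U = I, Sigma = I, V^T = A itself.
U, S, Vt = np.eye(2), np.eye(2), A.copy()
print(np.allclose(U @ S @ Vt, A))     # True: this is an SVD of A ...
u = U[:, 0]
print(np.allclose(A @ u, u) or np.allclose(A @ u, -u))
# ... but False: the column (1, 0) is not an eigenvector of A.

# An SVD whose singular vectors ARE eigenvectors also exists:
q = 1.0 / np.sqrt(2.0)
U2 = np.array([[q,  q],
               [q, -q]])              # columns: eigenvectors for eigenvalues +1 and -1
V2 = np.array([[q, -q],
               [q,  q]])              # second column negated to absorb the minus sign
print(np.allclose(U2 @ np.eye(2) @ V2.T, A))   # True
```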