Fastest converging algorithm for computing spectral radius of symmetric matrix

eigenvalues-eigenvectors, matrices, numerical-linear-algebra, spectral-norm, spectral-radius

There are quite a few algorithms available for numerically computing the eigenvalues of a matrix. Suppose one is not interested in the full spectrum of a general matrix, but only in the eigenvalue of largest magnitude, and that we have additional knowledge about the matrix at hand, namely:

  • it has real and non-positive entries
  • it is symmetric
  • it is definitely not sparse
  • the range of sizes that should be handled is between 10,000 and 50,000

At first sight, power iteration methods seem relevant, as they allow the eigenvalues to be computed one by one in order of decreasing magnitude, so I could simply stop after the first one. But power iteration does not really use my assumptions, so I figured it might be possible to do better. As it turns out, the Wikipedia entry states the following:

For symmetric matrices, the power iteration method is rarely used, since its convergence speed can be easily increased without sacrificing the small cost per iteration; see, e.g., Lanczos iteration and LOBPCG.

I can also see that the scipy Python package implements the Implicitly Restarted Lanczos Method in its scipy.sparse.linalg.eigsh function, which "find[s] k eigenvalues and eigenvectors of the real symmetric square matrix or complex hermitian matrix A".
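For reference, a minimal sketch of that eigsh call for this problem (the matrix here is a small random stand-in with non-positive entries, not the real data): asking for k=1 with which='LM' makes ARPACK's Lanczos code return only the eigenvalue of largest magnitude, and eigsh accepts dense arrays as well as sparse ones.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Small random stand-in: symmetric with non-positive entries.
rng = np.random.default_rng(0)
A = -rng.random((500, 500))
A = (A + A.T) / 2  # symmetrize

# k=1 with which='LM' requests the single eigenvalue of
# largest magnitude from ARPACK's (implicitly restarted) Lanczos.
top = eigsh(A, k=1, which='LM', return_eigenvectors=False)[0]
rho = abs(top)
```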

So it is now my understanding that the power method is not the way to go, but is the Lanczos iteration the best direction for my problem? Or is there a way to exploit my first assumption, that the matrix only contains real entries of the same sign, to speed up the process?
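On the sign assumption: if A is symmetric with non-positive entries, then B = -A is entrywise nonnegative, and by Perron-Frobenius the spectral radius of B (which equals that of A) is itself the largest eigenvalue of B. Plain power iteration on B started from a positive vector therefore converges directly to the spectral radius. A minimal sketch (function name and tolerances are illustrative, and a nonzero matrix is assumed):

```python
import numpy as np

def spectral_radius_power(A, tol=1e-10, max_iter=5000):
    """Power iteration on B = -A, assumed entrywise nonnegative.

    For symmetric A with non-positive entries, Perron-Frobenius
    guarantees rho(A) = rho(B) = lambda_max(B), so the dominant
    eigenvalue found by power iteration *is* the spectral radius.
    """
    n = A.shape[0]
    B = -A
    x = np.full(n, 1.0 / np.sqrt(n))  # positive starting vector
    lam = 0.0
    for _ in range(max_iter):
        y = B @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new
        if abs(lam_new - lam) < tol * lam_new:
            break
        lam = lam_new
    return lam_new
```

Each iteration is one matrix-vector product, so the per-step cost is O(n^2) for a dense matrix; the catch is that the convergence rate still depends on the gap between the two largest eigenvalues of B.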


Some preliminary experimental results for a small matrix (2000 rows and columns):

naive full spectrum computation with scipy.linalg.eig
--- 0.7484889030456543 seconds ---
rho: 36.7184965271

full spectrum computation with scipy.sparse.linalg.eigsh:
--- 0.7304418087005615 seconds ---
rho: 36.7184965271

eigsh stopped at k=1:
--- 0.06366991996765137 seconds ---
rho: 36.7184965271

cvxpy for computing spectral norm based on Rodrigo de Azevedo's hint:
--- 0.19952964782714844 seconds ---
rho: 36.718496527064126

Interestingly, moving from scipy.linalg.eig to scipy.sparse.linalg.eigsh does not speed things up much when computing the full spectrum. I also compared eigsh to the alternative method proposed by Rodrigo de Azevedo.
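A minimal reconstruction of the kind of timing harness behind the numbers above (size, seed, and print format are illustrative, not the original script):

```python
import time
import numpy as np
from scipy.linalg import eig
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(2)
A = -rng.random((1000, 1000))
A = (A + A.T) / 2

t0 = time.time()
# Full spectrum with the general (non-symmetric) dense driver.
rho_full = np.max(np.abs(eig(A, right=False)))
t_full = time.time() - t0

t0 = time.time()
# Only the largest-magnitude eigenvalue via Lanczos.
rho_top = abs(eigsh(A, k=1, which='LM', return_eigenvectors=False)[0])
t_top = time.time() - t0

print(f"eig   : {t_full:.3f} s, rho = {rho_full:.6f}")
print(f"eigsh : {t_top:.3f} s, rho = {rho_top:.6f}")
```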

Best Answer

When you call the Python routine, it actually dispatches to compiled Fortran routines that, I believe, choose specific sub-routines. If you look at Netlib, it has many routines for symmetric eigenvalue problems, which are all Lanczos-type algorithms. I am not sure how to exploit the fact that the entries are all non-positive. These routines will take into account the fact that your matrix isn't sparse.
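For the dense case there is also a direct LAPACK path worth noting: scipy.linalg.eigh with subset_by_index computes only selected eigenvalues (via the *syevr MRRR driver) without going through ARPACK at all. Since the spectral radius of a symmetric matrix is max(|lambda_min|, |lambda_max|), two extreme-eigenvalue calls suffice. A sketch on an illustrative random matrix:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
A = -rng.random((400, 400))
A = (A + A.T) / 2
n = A.shape[0]

# subset_by_index selects eigenvalues by position in ascending order;
# eigvals_only=True skips the eigenvector computation entirely.
lo = eigh(A, eigvals_only=True, subset_by_index=[0, 0])[0]        # smallest
hi = eigh(A, eigvals_only=True, subset_by_index=[n - 1, n - 1])[0]  # largest
rho = max(abs(lo), abs(hi))
```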

However, the issue is the size of your matrix. A dense 10k x 10k matrix may actually be workable, but a dense 50k x 50k matrix in double precision takes about 20 GB.
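The arithmetic behind that figure, for one float64 copy of the matrix:

```python
n = 50_000
bytes_needed = n * n * 8               # 8 bytes per float64 entry
print(f"{bytes_needed / 2**30:.1f} GiB")  # prints "18.6 GiB" (= 20 GB)
```

And most dense eigensolvers need at least one working copy on top of that, so the practical footprint is larger still.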

In terms of raw speed, there are likely libraries worth looking up for low-rank matrix approximation based on randomized algorithms.
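The core idea of those randomized methods can be sketched in a few lines: run a few power steps on a small random block, then do Rayleigh-Ritz on the resulting subspace. For symmetric A the spectral norm equals the spectral radius, so this gives a cheap estimate that converges quickly when the top eigenvalue is well separated (block size and iteration count below are illustrative choices):

```python
import numpy as np

def randomized_spectral_radius(A, k=5, n_iter=8, seed=0):
    """Randomized subspace iteration estimate of rho(A), A symmetric.

    Cost is n_iter block matrix products with a thin n x k block,
    i.e. O(n_iter * k * n^2) for dense A, versus O(n^3) for a full
    eigendecomposition.
    """
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((A.shape[0], k)))[0]
    for _ in range(n_iter):
        # Power step followed by re-orthonormalization for stability.
        Q = np.linalg.qr(A @ Q)[0]
    # Rayleigh-Ritz: eigenvalues of the small k x k projected matrix.
    T = Q.T @ A @ Q
    return np.max(np.abs(np.linalg.eigvalsh(T)))
```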