[Math] Computing the largest Eigenvalue of a very large sparse matrix

linear-algebra, matrices, numerical-methods

I am trying to compute the asymptotic growth-rate in a specific combinatorial problem depending on a parameter w, using the Transfer-Matrix method. This amounts to computing the largest eigenvalue of the corresponding matrix.

For small values of w, the corresponding matrix is small and I can use the so-called power method: start with some vector and multiply it by the matrix over and over, and under certain conditions you'll converge to the eigenvector corresponding to the largest eigenvalue. However, for the values of w I'm interested in, the matrix becomes too large, and so does the vector ($n > 10,000,000,000$ entries or so), so it can no longer fit in the computer's memory and I would need extra programming tricks or a very powerful computer.
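For concreteness, here is a minimal matrix-free power-method sketch in Python; the `matvec` callback (any routine computing $v \mapsto Av$ without storing $A$), the iteration cap, and the tolerance are illustrative assumptions, not part of the original setup, and the simple Rayleigh-quotient estimate presumes the dominant eigenvalue is real and simple, as is typical for nonnegative transfer matrices:

```python
import numpy as np

def power_method(matvec, n, max_iters=1000, tol=1e-10):
    """Estimate the dominant eigenvalue of A, given only a v -> A v callback."""
    v = np.random.rand(n)
    v /= np.linalg.norm(v)
    lam_old = 0.0
    for _ in range(max_iters):
        w = matvec(v)                  # one black-box multiplication by A
        lam = np.dot(v, w)             # Rayleigh-quotient estimate of the eigenvalue
        v = w / np.linalg.norm(w)      # renormalize to keep the iterate bounded
        if abs(lam - lam_old) <= tol * abs(lam):
            break
        lam_old = lam
    return lam, v
```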

As for the matrix itself, I don't need to store it in memory – I can access it as a black box, i.e. given $i,j$ I can compute $A_{ij}$ via a simple computation. Also, the matrix has only 0 and 1 entries, and I believe it to be sparse (i.e. only around $\log n$ of the entries are 1's, $n$ being the number of rows/columns). However, the matrix is not symmetric.
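With only entry-wise access to $A_{ij}$, the product $Av$ can still be formed one row at a time without ever materializing $A$. The sketch below assumes a hypothetical helper `nonzero_cols(i)` that enumerates the few columns $j$ with $A_{ij}=1$ in row $i$ (wrapping the simple computation mentioned above); scanning all $n$ columns per row would also work, but enumerating the 1-entries directly keeps the cost proportional to the number of nonzeros:

```python
import numpy as np

def make_matvec(n, nonzero_cols):
    """Return a v -> A v callback for a 0/1 matrix, given row-wise access
    to the columns holding its 1-entries (nonzero_cols is a hypothetical helper)."""
    def matvec(v):
        w = np.zeros(n)
        for i in range(n):
            # A[i, j] is 1 exactly for j in nonzero_cols(i), so the row sum
            # reduces to adding the corresponding entries of v
            w[i] = sum(v[j] for j in nonzero_cols(i))
        return w
    return matvec
```

A callback like this is what the power-method sketch above, and the Arnoldi-based answer below, would consume.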

Is there a more space-efficient method for computing eigenvalues in a case like this?

Best Answer

You could use the Arnoldi iteration algorithm. This algorithm only requires $A$ through matrix-vector multiplication, and I expect you will be able to implement the map $v \rightarrow Av$ as a black box. What you generate is a small upper Hessenberg matrix $H$ whose eigenvalues can be computed cheaply (by a direct method or Rayleigh quotient iteration) and which approximate the eigenvalues of $A$. Arnoldi iteration converges fastest to the extremal eigenvalues, the dominant one in particular, so I suspect you won't need many iterations before you have a good estimate.
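A minimal sketch of the Arnoldi loop in Python (NumPy), again assuming a black-box `matvec` and an illustrative Krylov dimension `m`; it builds an orthonormal basis $Q$ and the upper Hessenberg matrix $H$ whose eigenvalues (the Ritz values) approximate those of $A$:

```python
import numpy as np

def arnoldi(matvec, n, m):
    """Run m Arnoldi steps; return the square Hessenberg matrix H whose
    eigenvalues (Ritz values) approximate eigenvalues of A."""
    Q = np.zeros((n, m + 1))           # orthonormal Krylov basis
    H = np.zeros((m + 1, m))           # Hessenberg projection of A onto that basis
    q0 = np.random.rand(n)
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for k in range(m):
        w = matvec(Q[:, k])            # expand the Krylov subspace
        for j in range(k + 1):         # modified Gram-Schmidt against Q
            H[j, k] = np.dot(Q[:, j], w)
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] < 1e-12:        # exact invariant subspace found; stop early
            return H[:k + 1, :k + 1]
        Q[:, k + 1] = w / H[k + 1, k]
    return H[:m, :m]

# The dominant Ritz value estimates the largest eigenvalue of A, e.g.:
# lam_max = max(np.linalg.eigvals(arnoldi(matvec, n, 50)), key=abs)
```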

An excellent introduction to this is "Numerical Linear Algebra" by Trefethen and Bau (p. 250).

The basic algorithm can be found here: http://en.wikipedia.org/wiki/Arnoldi_iteration

Now the only thing required to make this a fully functional algorithm is a termination condition. Since you don't seem to need the dominant eigenvalue to high accuracy, I wouldn't worry too much: just stop when the dominant eigenvalue estimate stops changing appreciably between iterations.
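One simple way to realise that stopping rule with the sketch above (for clarity it restarts Arnoldi from scratch at each subspace size, which wastes some work; the step size and tolerance are illustrative): grow the Krylov subspace and stop once successive estimates of the dominant Ritz value agree to a modest tolerance:

```python
import numpy as np

def dominant_eigenvalue(matvec, n, step=10, max_m=200, tol=1e-6):
    """Increase the Arnoldi subspace size until the dominant Ritz value settles."""
    lam_old = None
    for m in range(step, max_m + 1, step):
        H = arnoldi(matvec, n, m)                    # sketch from above
        lam = max(np.linalg.eigvals(H), key=abs)
        if lam_old is not None and abs(lam - lam_old) <= tol * abs(lam):
            return lam
        lam_old = lam
    return lam
```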

If you have MATLAB, you can always use the built-in function eigs(Afun,n,...), where Afun is a black-box function handle that computes $Av$.
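If Python is more convenient, SciPy exposes the same idea through `scipy.sparse.linalg.eigs` (a wrapper around ARPACK's implicitly restarted Arnoldi method), which likewise accepts a matrix-free operator. The snippet below is a sketch rather than a drop-in solution: `nonzero_cols` and `make_matvec` are the assumed helpers from above, and the problem size is purely illustrative:

```python
from scipy.sparse.linalg import LinearOperator, eigs

n = 1_000_000                                    # illustrative size only
matvec = make_matvec(n, nonzero_cols)            # black-box v -> A v from the question
A = LinearOperator((n, n), matvec=matvec, dtype=float)
vals, vecs = eigs(A, k=1, which='LM')            # the single eigenvalue of largest magnitude
print(vals[0].real)                              # dominant eigenvalue of a 0/1 transfer matrix is real
```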

Good luck!
