[Math] Infinite matrix leading eigenvector problem

eigenvalues, hypergeometric functions, linear algebra, q-analogs

This question is cross-posted at Math.StackExchange.com.

I'm trying to find the leading eigenvalue and corresponding left and right eigenvectors of the following infinite matrix, for $\lambda>0$:

$$
\mathrm{A}=\left(
\begin{array}{cccccc}
1 &e^{-\lambda} & 0 &0 &0 & \dots\\
1 &e^{-\lambda} & e^{-2\lambda} &0 &0 & \dots\\
1 &e^{-\lambda} & e^{-2\lambda} &e^{-3\lambda} &0 & \dots\\
\vdots & \vdots & \vdots & & \ddots
\end{array}
\right)
$$

Note that there are terms above the main diagonal.

I know that in general infinite matrices aren't really a self-consistent idea. However, this problem arises from the infinite-$n$ limit of $n\times n$ matrices with the same values. That is, if $A^{(n)}$ is an $n\times n$ matrix such that $A^{(n)}_{ij}=A_{ij}$ then I'm looking for $\lim_{n\to\infty} \eta^{(n)}$, $\lim_{n\to\infty} u^{(n)}$ and $\lim_{n\to\infty} v^{(n)}$, where $\eta^{(n)}$, $u^{(n)}$ and $v^{(n)}$ are the leading eigenvalue of $A^{(n)}$ and its corresponding left and right eigenvectors.

I hope that the above leads to a consistent definition. I don't know much about the theory of operators on sequence spaces (which I gather is what's required to think about infinite matrix type problems correctly – pointers about how to apply it to my problem would be appreciated), but it seems to me that defining it as the limit of a sequence of finite problems should at least lead to something well-defined.

One thing that might be important is that this arises in a context where $p_i = u_iv_i$ forms a probability distribution. So it's the per-element product of the left and right eigenvectors that has to be normalisable in the $L_1$ norm. (This is why I'm not sure which sequence space I should be asking about.)

Of course these limits might not converge, but from investigating the $n\times n$ case numerically using power iteration, it looks like they do. The convergence is slower for smaller values of $\lambda$, and rounding errors become a problem when $\lambda$ gets too small, but it looks like it probably converges for all $\lambda>0$.
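The numerical experiment is easy to reproduce. Below is a minimal sketch of what I mean by the truncated problem and the power iteration: the matrix size `n`, the value of `lam`, the iteration count, and the helper names (`truncated_A`, `leading_triple`) are all my own illustrative choices, not part of the original problem.

```python
import numpy as np

def truncated_A(n, lam):
    """Build the n-by-n truncation A^(n): entry (i, j) is exp(-j*lam)
    for j <= i+1 (0-based indices), and zero above that."""
    A = np.zeros((n, n))
    for i in range(n):
        jmax = min(i + 1, n - 1)
        for j in range(jmax + 1):
            A[i, j] = np.exp(-j * lam)
    return A

def leading_triple(A, iters=2000):
    """Power iteration for the leading eigenvalue and its
    right (v) and left (u) eigenvectors."""
    v = np.ones(A.shape[0])
    u = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
        u = A.T @ u
        u /= np.linalg.norm(u)
    eta = v @ (A @ v) / (v @ v)   # Rayleigh quotient
    return eta, u, v

n, lam = 30, 1.0                  # illustrative truncation size and lambda
A = truncated_A(n, lam)
eta, u, v = leading_triple(A)
p = u * v
p /= p.sum()                      # normalise the product so p_i sums to 1
```

For $\lambda=1$ this gives an eigenvalue consistent with the Perron bounds $1+e^{-\lambda} \le \eta \le 1/(1-e^{-\lambda})$ (minimum and maximum row sums), and the entries of $p$ all come out positive, as Perron-Frobenius suggests they should.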

Note that I only care about the leading eigenvalue, i.e. the one with the largest magnitude, which should be real and positive. Its corresponding eigenvectors should have only positive entries, due to the Perron-Frobenius theorem.

Alternatively, if it's easier, a solution for the following matrix will be just as useful to me:
$$
\mathrm{B}=\left(
\begin{array}{cccccc}
1 & 1& 0 &0 &0 & \dots\\
e^{-\lambda} &e^{-\lambda} & e^{-\lambda} &0 &0 & \dots\\
e^{-2\lambda} & e^{-2\lambda} &e^{-2\lambda} &e^{-2\lambda} &0 & \dots\\
\vdots & \vdots & \vdots & & \ddots
\end{array}
\right)
$$

Again note the terms above the diagonal. (The two problems are not equivalent; it's just that either one of them will help me solve a larger problem.)
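The same kind of numerical check applies to $\mathrm B$. A quick sketch (again, the truncation sizes, `lam`, and helper names are my own ad hoc choices) suggesting the truncated leading eigenvalues converge and stay within the obvious Perron bounds ($\eta \ge B_{11} = 1$ and $\eta \le$ the maximum row sum, which for $\lambda = 1$ is the first row's sum, $2$):

```python
import numpy as np

def truncated_B(n, lam):
    """n-by-n truncation of B: row i (0-based) has the constant value
    exp(-i*lam) in columns 0..i+1, and zeros beyond."""
    B = np.zeros((n, n))
    for i in range(n):
        B[i, : min(i + 2, n)] = np.exp(-i * lam)
    return B

def leading_eig(M, iters=2000):
    """Leading eigenvalue by power iteration (Rayleigh quotient)."""
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v @ (M @ v)

lam = 1.0
etas = [leading_eig(truncated_B(n, lam)) for n in (25, 50, 100)]
```

Since the rows of $\mathrm B$ decay geometrically, the truncated eigenvalues stabilise quickly: the $n=50$ and $n=100$ values agree to many digits, consistent with a bounded limiting spectrum for this $\lambda$.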

The problem is that I just don't have much of an idea how to do this. I've tried a variety of naive methods, along the lines of writing the eigenvalue equation $\mathrm{A}\mathbf{x} = \eta \mathbf{x}$ as a system of equations and then trying to find $\{x_i >0\}$ and $\eta>0$ satisfying them, but this doesn't seem to lead anywhere nice.

It could be that there is no analytical solution. Or, even worse, it could be that these matrices have unbounded spectra after all (in which case I'd really like to know!), but if anyone has any insight into how to solve one of these two problems I'd really appreciate it.

Best Answer

I found the characteristic polynomial in the limit $n\to\infty$. As pointed out above, the characteristic polynomial $c_n(x)=\det(\mathrm A_n-x 1)$ can be written in terms of the $q$-binomial as
$$
c_n(x)=\sum_{k=0}^{n/2+1}q^{k(k-1)}\binom{n-k+1}{k}_{\!q} \, (-x)^{n-k},
$$
where $q=e^{-\lambda}$. Inserting the definition of the $q$-binomial,
$$
\binom{n-k+1}{k}_{\!q} = \frac{(q;q)_{n-k+1}}{(q;q)_{n-2k+1}\,(q;q)_{k}},
$$
and dropping $n$-dependent prefactors, we can take the limit $n\to\infty$ to get
$$
c_\infty(x) = \sum_{k=0}^{\infty}q^{k(k-1)} \frac{\left(-x\right)^{-k}}{(q;q)_{k}}.
$$
This sum exactly matches the definition of the basic hypergeometric series, or $q$-hypergeometric function (https://reference.wolfram.com/language/ref/QHypergeometricPFQ.html),
$$
{_r \phi_s}(a;b;q;z)=\sum_{k=0}^\infty \frac{(a_1;q)_k \ldots (a_r;q)_k}{(b_1;q)_k \ldots (b_s;q)_k} \left((-1)^k q^{k(k-1)/2}\right)^{1+s-r} \frac{z^k}{(q;q)_k}.
$$
The characteristic polynomial of the matrix $\mathrm A_n$ for $n\to\infty$ then becomes the nice expression
$$
c_\infty(x) = {_0 \phi_1}\!\left(;0;q;-x^{-1}\right).
$$
After deriving this result, a web search revealed that this function is well known. The case ${_0 \phi_1}(;0;q;-q z)$ is also known as the Ramanujan function or $q$-Airy function; see page 27 of https://web.math.pmf.unizg.hr/najman_conference/3rd/slides/stovicek.pdf and references therein, where its zeroes are also discussed.
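As a numerical sanity check: the series for $c_\infty$ converges extremely fast thanks to the $q^{k(k-1)}$ factor, so its largest real zero can be located by bisection and compared against the leading eigenvalue of a truncated $\mathrm A_n$. A sketch, assuming $\lambda=1$; the truncation size, the bracket $[1,2]$ (where the partial sum changes sign for this $q$), and the tolerances are ad hoc choices:

```python
import numpy as np

def c_inf(x, q, terms=40):
    """Partial sum of c_inf(x) = sum_k q^{k(k-1)} (-x)^{-k} / (q;q)_k."""
    total, poch = 0.0, 1.0              # poch = (q;q)_k, with (q;q)_0 = 1
    for k in range(terms):
        total += q ** (k * (k - 1)) * (-1.0 / x) ** k / poch
        poch *= 1.0 - q ** (k + 1)      # (q;q)_{k+1} = (q;q)_k (1 - q^{k+1})
    return total

def bisect_root(f, lo, hi, tol=1e-12):
    """Simple bisection, assuming f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0:
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = 1.0
q = np.exp(-lam)
root = bisect_root(lambda x: c_inf(x, q), 1.0, 2.0)

# Leading eigenvalue of a 200x200 truncation of A, for comparison.
n = 200
A = np.zeros((n, n))
for i in range(n):
    m = min(i + 2, n)
    A[i, :m] = np.exp(-lam * np.arange(m))
eta = np.linalg.eigvals(A).real.max()
```

The zero of $c_\infty$ found this way agrees with the truncated-matrix eigenvalue to high precision, which is consistent with the identification of the eigenvalues as zeroes of the $q$-Airy function.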
