[Math] Singular values of sequence of growing matrices

linear-algebra, matrices, ra.rings-and-algebras

I asked this question on math.stackexchange and haven't received an answer in two weeks, so I'm repeating it here.

Let

$$
H=\left(\begin{array}{cccc}
0 & 1/2 & 0 & 1/2 \cr
1/2 & 0 & 1/2 & 0 \cr
1/2 & 0 & 0 & 1/2\cr
0 & 1/2 & 1/2 & 0
\end{array}\right),
$$

$K_1(\alpha)=\left(\begin{array}{c}1 \\ \alpha\end{array}\right)$ and consider the sequence of matrices defined by
$$
K_L(\alpha) = \left[H\otimes I_{2^{L-2}}\right]\left[I_2 \otimes K_{L-1}(\alpha)\right],
$$
where $\otimes$ denotes the Kronecker product, and $I_n$ is the $n\times n$ identity matrix.

I am interested in the limiting behaviour of the singular values of $K_L(\alpha)$, in particular of $K_L(0)$, as $L$ tends to infinity. Some calculations indicate that the $2^L\times 2^{L-1}$ matrix $K_L$ has $L$ non-zero singular values and that, for any positive integer $k$, the $k$ largest singular values each converge to a limit.
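
For concreteness, here is a minimal NumPy sketch (the helper name `K` and the loop bounds are my choices) of how these singular values can be computed directly from the recursion:

```python
import numpy as np

H = 0.5 * np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]])

def K(L, alpha):
    """Build K_L(alpha) from K_1 = [1; alpha] via K_L = (H kron I_{2^(L-2)}) (I_2 kron K_{L-1})."""
    M = np.array([[1.0], [alpha]])
    for ell in range(2, L + 1):
        M = np.kron(H, np.eye(2 ** (ell - 2))) @ np.kron(np.eye(2), M)
    return M

# leading singular values of K_L(0); only about L of them are non-negligible
for L in range(2, 10):
    s = np.linalg.svd(K(L, 0.0), compute_uv=False)
    print(L, np.round(s[:4], 4))
```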

Question: Can this limit be described in terms of the matrix $H$?

I did some experiments, and it seems that the limiting behaviour of the singular values of $K_L$ depends not only on the matrix $H$ but also on the initial value $K_1(\alpha)$. This makes it unlikely that fixed-point arguments will work in this setting.

I also tried to obtain combinatorial expressions for the coefficients in the characteristic polynomial $\chi_L^\alpha(\lambda)$ of $K_L(\alpha)K_L(\alpha)^T$ but was successful only for the three highest non-trivial powers of $\lambda$.

Edit:

The analysis of $\Sigma(\alpha):=\lim_{L\to\infty}\sigma_1(K_L(\alpha))$ as a function of $\alpha$, as suggested by Suvrit, seems to be a good idea. Numerical calculations indicate that, asymptotically,
$$
\Sigma(\alpha)\sim \Sigma(0)\left(.3540+\alpha\right),\quad \alpha\to\infty,\quad \Sigma(0)\approx .8254,
$$
and that $\Sigma(\alpha)$ attains a minimum value of approximately $.7696$ at $\alpha\approx -.2936$.

I do not yet see, however, whether this can be used to compute $\Sigma(0)$ more precisely.
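
A sketch of one way to obtain such estimates numerically, using $\sigma_1(K_L(\alpha))$ at a moderate truncation level as a stand-in for $\Sigma(\alpha)$ (the level $L=9$ and the grids below are arbitrary choices of mine):

```python
import numpy as np

# same K_L(alpha) builder as in the first sketch
def K(L, a, H=0.5*np.array([[0,1,0,1],[1,0,1,0],[1,0,0,1],[0,1,1,0]])):
    M = np.array([[1.0], [a]])
    for l in range(2, L + 1):
        M = np.kron(H, np.eye(2**(l-2))) @ np.kron(np.eye(2), M)
    return M

def sigma1(L, a):
    """Largest singular value of K_L(alpha)."""
    return np.linalg.svd(K(L, a), compute_uv=False)[0]

L = 9                                        # truncation level, used as a proxy for the limit
print("Sigma(0) ~", sigma1(L, 0.0))

grid = np.linspace(-1.0, 1.0, 201)           # coarse search for the minimum
vals = np.array([sigma1(L, a) for a in grid])
i = vals.argmin()
print("minimum  ~ (alpha, Sigma) = (", grid[i], ",", vals[i], ")")

# Sigma(alpha)/Sigma(0) - alpha: per the fit above this should approach ~ .3540
for a in (10.0, 100.0):
    print(a, sigma1(L, a) / sigma1(L, 0.0) - a)
```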

Edit 2:

Using the improved bound $\sigma_1(K_L)\leq \frac{1}{2}\sqrt{3+2\alpha +3 \alpha ^2}$, which is sharp for $\alpha=\pm 1$, we can deduce that $\Sigma'(-1)=-1/2$ and $\Sigma'(1)=1/\sqrt{2}$.
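
The bound and its sharpness at $\alpha=\pm 1$ are also easy to check numerically (again with a finite $L$ as a stand-in for the limit):

```python
import numpy as np

# same K_L(alpha) builder as in the first sketch
def K(L, a, H=0.5*np.array([[0,1,0,1],[1,0,1,0],[1,0,0,1],[0,1,1,0]])):
    M = np.array([[1.0], [a]])
    for l in range(2, L + 1):
        M = np.kron(H, np.eye(2**(l-2))) @ np.kron(np.eye(2), M)
    return M

# compare sigma_1(K_L(alpha)) with the bound (1/2) sqrt(3 + 2 alpha + 3 alpha^2)
for a in (-1.0, -0.5, 0.0, 0.5, 1.0):
    s1 = np.linalg.svd(K(9, a), compute_uv=False)[0]
    print(f"alpha = {a:5.2f}   sigma1 = {s1:.4f}   bound = {0.5*np.sqrt(3 + 2*a + 3*a**2):.4f}")
```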

Edit 3:

After staring at the problem a little longer I've come up with a conjecture for the characteristic polynomial $\chi_L^{\alpha}(\lambda)$ of $K_L(\alpha)K_L(\alpha)^T$. More precisely, I believe that
$$
\lambda^{-2^L+L}\chi_L^{\alpha}(\lambda) =\lambda ^L-\left(1+\alpha ^2\right)\lambda ^{L-1}\\
+\frac{1}{2^L-1}\sum _{k=2}^L \left(-2\right)^{-k}\left[(1-\alpha)^{2(k-1)}\right]\left[(1-\alpha )^2+k (1+\alpha )^2 \right]\frac{(2^{-L};2)_k}{[k]_2!}\left(2^k-2+2^L\right)\lambda^{L-k},
$$
where $(a;q)_k$ denotes a q-Pochhammer symbol and $[k]_q!$ denotes a q-factorial.
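
The conjecture can be tested against small cases; the following sketch (all helper names are mine) compares the conjectured coefficients with the coefficients of $\prod_{i=1}^{L}(\lambda-\mu_i)$, where the $\mu_i$ are the $L$ non-zero eigenvalues of $K_L(\alpha)K_L(\alpha)^T$ computed numerically:

```python
import numpy as np

# same K_L(alpha) builder as in the first sketch
def K(L, a, H=0.5*np.array([[0,1,0,1],[1,0,1,0],[1,0,0,1],[0,1,1,0]])):
    M = np.array([[1.0], [a]])
    for l in range(2, L + 1):
        M = np.kron(H, np.eye(2**(l-2))) @ np.kron(np.eye(2), M)
    return M

def qpoch(a, q, k):            # (a; q)_k = prod_{j=0}^{k-1} (1 - a q^j)
    return np.prod([1 - a * q**j for j in range(k)])

def qfact(k, q=2):             # [k]_q! = prod_{j=1}^{k} (q^j - 1)/(q - 1)
    return np.prod([(q**j - 1.0) / (q - 1.0) for j in range(1, k + 1)])

def conjectured_coeffs(L, a):
    """Coefficients of lambda^L, lambda^(L-1), ..., lambda^0 in the conjectured polynomial."""
    c = [1.0, -(1 + a**2)]
    for k in range(2, L + 1):
        c.append((-2.0)**(-k) * (1 - a)**(2*(k-1)) * ((1 - a)**2 + k*(1 + a)**2)
                 * qpoch(2.0**(-L), 2, k) / qfact(k)
                 * (2**k - 2 + 2**L) / (2**L - 1))
    return np.array(c)

L, a = 6, 0.0
mu = np.linalg.svd(K(L, a), compute_uv=False)[:L]**2   # the L non-zero eigenvalues of K K^T
print(np.poly(mu))                                      # numerical coefficients (highest power first)
print(conjectured_coeffs(L, a))                         # conjectured coefficients
```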

It appears that this formula implies that $\lim_{L\to\infty}\sigma_1(K_L(0))$ can be characterized as $\kappa^{-1/2}$, where $\kappa$ is the smallest positive zero of $x\mapsto f(-x/2)$ and
$$
f:x\mapsto\sum_{k=0}^\infty{\frac{(k+1)x^k}{[k]_2!}}.
$$
Interestingly, this function is related to the q-exponential.

More generally, $\lim_{L\to\infty}\sigma_1(K_L(\alpha))$ can apparently be characterized as $\kappa_\alpha^{-1/2}$, where $\kappa_\alpha$ is the smallest positive zero of $x\mapsto f_\alpha(-x/2)$ and
$$
f_\alpha:x\mapsto\sum_{k=0}^\infty{\frac{(1-\alpha )^{2 (k-1)} \left[(1-\alpha )^2+k (1+\alpha )^2\right]x^k}{[k]_2!}}.
$$
The other singular values are similarly obtained from the remaining zeros of $x\mapsto f_\alpha(-x/2)$.
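
This characterization (including the $\alpha=0$ case above, which $f_\alpha$ reduces to) can be checked numerically by locating the smallest positive zero of $x\mapsto f_\alpha(-x/2)$ with a grid scan followed by Brent's method and comparing $\kappa_\alpha^{-1/2}$ with $\sigma_1(K_L(\alpha))$ at a moderate $L$; the truncation parameters below are arbitrary choices of mine:

```python
import numpy as np
from scipy.optimize import brentq

# same K_L(alpha) builder as in the first sketch
def K(L, a, H=0.5*np.array([[0,1,0,1],[1,0,1,0],[1,0,0,1],[0,1,1,0]])):
    M = np.array([[1.0], [a]])
    for l in range(2, L + 1):
        M = np.kron(H, np.eye(2**(l-2))) @ np.kron(np.eye(2), M)
    return M

def qfact(k):                                  # [k]_2! = prod_{j=1}^k (2^j - 1)
    return np.prod([2.0**j - 1.0 for j in range(1, k + 1)])

def f_alpha(x, a, terms=40):                   # truncated series for f_alpha
    return sum((1 - a)**(2*(k-1)) * ((1 - a)**2 + k*(1 + a)**2) * x**k / qfact(k)
               for k in range(terms))

def kappa(a, xmax=50.0, n=5000):
    """Smallest positive zero of x -> f_alpha(-x/2); assumes a sign change below xmax."""
    g = lambda x: f_alpha(-x / 2.0, a)
    xs = np.linspace(1e-9, xmax, n)
    v = np.array([g(x) for x in xs])
    j = int(np.argmax(v[:-1] * v[1:] < 0))     # first sign change on the grid
    return brentq(g, xs[j], xs[j + 1])

for a in (0.0, -0.3, 0.5):
    print(a, kappa(a)**-0.5, np.linalg.svd(K(9, a), compute_uv=False)[0])
```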

The next task will be to say something about the singular vectors.

Best Answer

First let's change the matrix $H$ to $H=\frac{1}{2}\left(\begin{array}{cc}I_2 & I_2 \\ I_2& P_2\end{array}\right)$, where $P_2$ is a $2\times 2$ permutation matrix. This swapping of the rows of $H$ won't affect the singular values of the matrices. Now let's consider what effect the iteration has on a matrix of the form $\left(\begin{array}{cc}X &x \\ X &y\end{array}\right)$, where $X$ has $n$ columns. At the next iteration we will have $$ \frac{1}{2}\left(\begin{array}{cccc}X &x & X &x \\ X &y& X & y\\ X&x &X & y\\ X & y& X & x\end{array}\right)$$ To account for the linear dependence of the columns, we multiply on the right by

$$\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}I_n & 0 &0 \\ 0 & 1& 0 \\ \frac{1}{\sqrt{2}}I_n & 0 &0 \\ 0 & 0& 1\end{array}\right)$$ The resulting matrix is $$ \frac{1}{2}\left(\begin{array}{ccc}\sqrt{2}X &x &x \\ \sqrt{2}X &y& y\\ \sqrt{2}X&x & y\\ \sqrt{2}X & y& x\end{array}\right)$$
We note that if the original matrix has orthonormal columns, then so does the new one.

This accounts for why the rank of the matrix increases by one with each iteration. Also note that if $x=y$ then the rank will always be one. If we do this reduction at each step, we find that for the starting value $K_1=[1;0]$ the matrix $G=K_L^TK_L$ appears to converge to a Hankel matrix plus a diagonal matrix, with diagonal entries $G_{ii}=(\sqrt{2})^{-2i}$ and off-diagonal entries $G_{ij}=(\sqrt{2})^{-i-j-2}$. This gives a very nice matrix, but only for this particular starting vector. If you use the starting vector $K_1=[1;1]/\sqrt{2}$, then $K_L$ will always have rank one and its largest singular value will be one.
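
A quick numerical test of this claimed limit (reading the indices as $i,j\ge 1$; the truncation size is arbitrary): build the candidate $G$ and compare its leading eigenvalues with the leading squared singular values of $K_L(0)$, which should agree if $G$ really is the limiting Gram matrix of the reduced $K_L$.

```python
import numpy as np

# same K_L(alpha) builder as in the first sketch in the question
def K(L, a, H=0.5*np.array([[0,1,0,1],[1,0,1,0],[1,0,0,1],[0,1,1,0]])):
    M = np.array([[1.0], [a]])
    for l in range(2, L + 1):
        M = np.kron(H, np.eye(2**(l-2))) @ np.kron(np.eye(2), M)
    return M

# candidate limiting Gram matrix: Hankel off-diagonal part plus the stated diagonal
m = 20                                                   # truncation size (arbitrary)
idx = np.arange(1, m + 1)                                # reading the indices as i, j = 1, 2, ...
G = np.sqrt(2.0) ** (-(idx[:, None] + idx[None, :] + 2))
np.fill_diagonal(G, np.sqrt(2.0) ** (-2.0 * idx))

print(np.sort(np.linalg.eigvalsh(G))[::-1][:5])            # leading eigenvalues of the candidate G
print(np.linalg.svd(K(10, 0.0), compute_uv=False)[:5]**2)  # leading squared singular values of K_10(0)
```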

Based on the observation above, consider the matrix $$K_1=\frac{1}{\sqrt{2}}\left(\begin{array}{rr} 1 & 1\\ 1 & -1 \end{array}\right)\left(\begin{array}{c}\alpha \\ \beta\end{array}\right)$$ such that $K_1$ has 2-norm one. Applying the iteration to this matrix we have $$K_2=\frac{1}{2\sqrt{2}}\left(\begin{array}{rrrr} 1 & 1 & 1 &1 \\ 1 & -1 &1 &-1\\ 1& 1 & 1&-1\\1& -1 & 1 &1 \end{array}\right)\left(\begin{array}{cc}\alpha &0\\ \beta & 0 \\ 0 & \alpha \\ 0& \beta\end{array}\right)$$ Accounting for the nullspace as outlined above requires multiplying by the transpose of the matrix used to remove the nullspace: $$\left(\begin{array}{cccr} \frac{1}{\sqrt{2}}I_1 & 0 &\frac{1}{\sqrt{2}}I_1&0\\ 0& 1& 0&0 \\ 0 & 0& 0 & 1\\ \end{array}\right) \left(\begin{array}{cc}\alpha &0\\ \beta & 0 \\ 0 & \alpha \\ 0& \beta\end{array}\right) $$ We then have for $K_2$ $$ \frac{1}{2\sqrt{2}}\left(\begin{array}{rrr} \sqrt{2} & 1 &1 \\ \sqrt{2} & -1 &-1\\ \sqrt{2}& 1 & -1\\ \sqrt{2}& -1 & 1 \end{array}\right)\left(\begin{array}{cc}\frac{1}{\sqrt{2}} \alpha & \frac{1}{\sqrt{2}}\alpha\\ \beta & 0 \\ 0 & \beta\end{array}\right) $$ Noticing that the matrix on the left has orthogonal columns, we rescale so that it has orthonormal columns, which means that we only need to understand what the iteration is doing to the matrix on the right.
$$ \frac{1}{2}\left(\begin{array}{rrr} 1 & 1 &1 \\ 1 & -1 &-1\\ 1& 1 & -1\\ 1& -1 & 1 \end{array}\right)\left(\begin{array}{cc}\frac{1}{\sqrt{2}} \alpha & \frac{1}{\sqrt{2}}\alpha\\ \frac{1}{\sqrt{2}}\beta & 0 \\ 0 & \frac{1}{\sqrt{2}}\beta\end{array}\right) $$ All these same steps can be taken at the next iteration. The resulting $\alpha$,$\beta$ matrix will be $$K_3=\left(\begin{array}{cccc}\frac{1}{2}\alpha &\frac{1}{2}\alpha &\frac{1}{2}\alpha &\frac{1}{2}\alpha \\ \frac{1}{2}\beta&0 &\frac{1}{2}\beta&0\\0&\frac{1}{2}\beta & 0 &0\\ 0 & 0&0&\frac{1}{2}\beta\end{array}\right)$$

In general the $\alpha$,$\beta$ matrix will have the form $$\left(\begin{array}{c}a^T\\B\\b^T\end{array}\right)$$ with the $\alpha$,$\beta$ matrix for the next iteration being $$\frac{1}{\sqrt{2}}\left(\begin{array}{cc}a^T &a^T\\B&B\\b^T& 0\\0 & b^T\end{array}\right)$$ The matrix $B$ and the vector $b$ depend only on $\beta$; at the $L$th iteration their entries are either zero or $\beta 2^{\frac{1-L}{2}}$. At the $L$th iteration the vector $a$ has entries $\alpha 2^{\frac{1-L}{2}}$. The matrix $B$ has $L-1$ rows; its $i$th row has $2^{L-i-1}$ nonzero entries, and the rows of $B$ are orthogonal to one another as well as to the vector $b$. The whole matrix is $2^{L-1}$ columns wide. The singular values of the $\alpha$,$\beta$ matrix determine the singular values of $K_L$. Due to the structure described, multiplying the $\alpha$,$\beta$ matrix by its transpose yields an arrowhead matrix

$$\left(\begin{array}{ccccc}\alpha^2 & \alpha\beta 2^{-1} & \alpha\beta 2^{-2}& \alpha\beta 2^{-3}& \ldots \\ \alpha\beta 2^{-1} & \beta^2 2^{-1} & \ &\ & \ \\ \alpha\beta 2^{-2} & & \beta^2 2^{-2} & &\\ \vdots & &&\ddots &\\ \end{array}\right)$$ whose eigenvalues are the roots of $$f(\lambda)=\alpha^2-\lambda-\sum_{i=1}^\infty\frac{\frac{\alpha^2\beta^2}{2^{2i}}}{\frac{\beta^2}{2^i}-\lambda}$$
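
Here is a small numerical illustration of this description (the truncation size $m$ is arbitrary; for the question's $K_1(0)=[1;0]$ the change of basis above gives $\alpha=\beta=1/\sqrt{2}$): the largest eigenvalue of the truncated arrowhead matrix agrees with the root of the truncated secular function above the largest shaft entry, and its square root can be compared with the $\Sigma(0)\approx.8254$ reported in the question.

```python
import numpy as np
from scipy.optimize import brentq

def arrowhead(alpha, beta, m):
    """(m+1) x (m+1) truncation of the arrowhead matrix above."""
    A = np.zeros((m + 1, m + 1))
    A[0, 0] = alpha**2
    for i in range(1, m + 1):
        A[i, i] = beta**2 / 2**i
        A[0, i] = A[i, 0] = alpha * beta / 2**i
    return A

def secular(lam, alpha, beta, m):
    """Truncation of f(lambda) to m terms."""
    return alpha**2 - lam - sum(alpha**2 * beta**2 / 2**(2*i) / (beta**2 / 2**i - lam)
                                for i in range(1, m + 1))

a = b = 1 / np.sqrt(2)          # corresponds to the question's K_1(0) = [1; 0]
m = 40
top = np.linalg.eigvalsh(arrowhead(a, b, m))[-1]                      # largest eigenvalue
root = brentq(lambda x: secular(x, a, b, m), b**2 / 2 + 1e-12, 2.0)   # root above the largest shaft entry
print("largest eigenvalue  :", top)
print("largest secular root:", root)
print("square root         :", np.sqrt(top), "(compare Sigma(0) ~ .8254 from the question)")
```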
