[Math] How to generate a random matrix whose eigenvalues are less than one

eigenvalues-eigenvectors, linear-algebra, matrices

I generate a random matrix. It has this general form:

$$\mathbf{B}=\begin{bmatrix} \mathbf{A}_1&\mathbf{A}_2&\ldots&\mathbf{A}_{p-1}&\mathbf{A}_p \\
\mathbf{I}_M&\mathbf{I}_M&\ldots&\mathbf{I}_M&\mathbf{O}_M \\
\vdots &\vdots &\ddots& \vdots& \vdots\\
\mathbf{I}_M&\mathbf{I}_M&\ldots&\mathbf{I}_M&\mathbf{O}_M \end{bmatrix} \quad (pM \times pM)$$

in which each $\mathbf{A}_i$ is $M \times M$ for $i=1,\ldots,p$, and $\mathbf{O}_M$ and $\mathbf{I}_M$ are the $M\times M$ zero and identity matrices.

The elements of the $\mathbf{A}_i$ are random (drawn from a multivariate normal distribution). The eigenvalues of $\mathbf{B}$ should be less than one in modulus, and I don't want to repeat the random-number generation until this happens. Instead, I want to change some elements of a generated matrix (one that has at least one eigenvalue larger than one) so that all eigenvalues become less than one. Is there any way to do so?
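For reference, this is roughly how I currently generate and test $\mathbf{B}$ (a minimal NumPy sketch; the helper name `build_B` and the sizes $M=3$, $p=4$ are just illustrative choices, not part of the problem):

```python
import numpy as np

def build_B(A_blocks, M, p):
    """Assemble the pM x pM matrix B from the question: first block row
    [A_1 ... A_p], block rows 2..p all equal to [I ... I O]."""
    B = np.zeros((p * M, p * M))
    B[:M, :] = np.hstack(A_blocks)                            # [A_1 A_2 ... A_p]
    B[M:, :(p - 1) * M] = np.tile(np.eye(M), (p - 1, p - 1))  # [I ... I O] rows
    return B

M, p = 3, 4                                    # illustrative sizes
rng = np.random.default_rng(0)
A_blocks = [rng.normal(size=(M, M)) for _ in range(p)]        # normal entries
B = build_B(A_blocks, M, p)
print(np.max(np.abs(np.linalg.eigvals(B))))    # spectral radius; I need this < 1
```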

(I think I need to answer this question first: if I change $B(i,j)$, how are the eigenvalues of $B$ affected? But I don't know the answer.)

Edit:
I have this idea: what if I generate a set of random eigenvalues and then construct my matrix from them? I don't know how to proceed.
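For a general (unstructured) matrix, I can imagine the idea looking like the sketch below: draw eigenvalues inside the unit interval and conjugate a diagonal matrix by a random similarity. But this does not reproduce the block structure of $\mathbf{B}$ above, which is my real problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# draw target eigenvalues with modulus < 1 (real ones here, for simplicity;
# complex eigenvalues would have to come in conjugate pairs)
eigs = rng.uniform(-1, 1, size=n)
# similarity transform of diag(eigs) by a random invertible matrix
V = rng.normal(size=(n, n))
X = V @ np.diag(eigs) @ np.linalg.inv(V)
print(np.sort(np.linalg.eigvals(X).real))      # recovers the chosen eigenvalues
print(np.sort(eigs))
```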

Thanks.

Best Answer

I don't have an answer to your question, but here are some thoughts.

If your normal distribution is able to generate a $B$ whose spectral radius is $\ge1$, then any procedure that modifies $B$ to make its spectral radius less than one will clearly produce samples that no longer follow the normal distribution. So I assume that you want $B$ to follow the normal distribution conditioned on the spectral radius being less than one.

This already seems to me a difficult problem, but the difficulty goes beyond that. In fact, when $p$ and $M$ are large, any such procedure necessarily means that you are drawing samples from the tails of the normal distribution, i.e. you are mostly using extreme events as samples.
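You can see this numerically with a naive rejection loop (a rough Monte Carlo sketch; the small sizes $M=2$, $p=3$ and the trial count are arbitrary). The printed acceptance rate is already modest here, and by the argument below it collapses as $M$ and $p$ grow:

```python
import numpy as np

def spectral_radius(A_blocks, M, p):
    """Spectral radius of the B defined in the question."""
    B = np.zeros((p * M, p * M))
    B[:M, :] = np.hstack(A_blocks)
    B[M:, :(p - 1) * M] = np.tile(np.eye(M), (p - 1, p - 1))
    return np.max(np.abs(np.linalg.eigvals(B)))

M, p, trials, hits = 2, 3, 2000, 0             # small, arbitrary sizes
rng = np.random.default_rng(2)
for _ in range(trials):
    A_blocks = [rng.normal(size=(M, M)) for _ in range(p)]
    hits += spectral_radius(A_blocks, M, p) < 1
print(hits / trials)                           # acceptance rate of the rejection loop
```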

Here is an explanation. Let $v=(v_1^T,v_2^T,\ldots,v_p^T)^T$ be an eigenvector of $B$. Since block rows $2$ through $p$ of $B$ are identical, the equation $Bv=0$ for an eigenvector of the zero eigenvalue reduces to $$ \begin{pmatrix} A_1 & A_2 & \ldots & A_{p-1} & A_p\\ I & I & \ldots & I & 0 \end{pmatrix}v=0. $$ The rank of the matrix on the LHS is almost surely $2M$, so its null space is $(p-2)M$-dimensional and $B$ almost surely has $2M$ nonzero eigenvalues.
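A quick numerical check of this count (the sizes and the tolerance `1e-8` are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
M, p = 3, 5                                            # arbitrary small sizes
B = np.zeros((p * M, p * M))
B[:M, :] = np.hstack([rng.normal(size=(M, M)) for _ in range(p)])
B[M:, :(p - 1) * M] = np.tile(np.eye(M), (p - 1, p - 1))

print(np.linalg.matrix_rank(B))                        # 2*M = 6
print(np.sum(np.abs(np.linalg.eigvals(B)) > 1e-8))     # 2*M nonzero eigenvalues
```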

Now suppose $\lambda$ is a nonzero eigenvalue of $B$. From $Bv=\lambda v$, we get
\begin{align} A_1v_1 + \sum_{i=2}^p A_iv_i &= \lambda v_1,\\ v_1 + v_2+v_3+\ldots+v_{p-1} &= \lambda v_2,\\ v_1 + v_2+v_3+\ldots+v_{p-1} &= \lambda v_3,\\ &\vdots\\ v_1 + v_2+v_3+\ldots+v_{p-1} &= \lambda v_p. \end{align}
Since $\lambda\ne0$ and the last $p-1$ equations share the same left-hand side, we get $v_2=v_3=\ldots=v_p=$ some vector $u$; the second equation then gives $v_1 = [\lambda-(p-2)]u$, and the first becomes
\begin{align} A_1[\lambda-(p-2)]u + \sum_{i=2}^p A_iu &= \lambda [\lambda-(p-2)]u,\\ \textrm{i.e.}\quad C_\lambda u:=\left(\lambda^2 I - \lambda [(p-2) I + A_1] + \left[(p-2)A_1 - \sum_{i=2}^p A_i\right]\right)u &= 0. \end{align}

Let $q(\lambda) = \det C_\lambda$. Then $q(\lambda)=0$ is a degree-$2M$ polynomial equation in $\lambda$. The sum $S$ of its $2M$ roots is minus the coefficient of $\lambda^{2M-1}$ in $q(\lambda)$, i.e. $S = \mathrm{trace}\left((p-2) I + A_1\right) = M(p-2)+\mathrm{trace}\,A_1$. For these $2M$ roots to all have moduli smaller than $1$, $|S|$ must be smaller than $2M$. Hence $\left|(p-2)+\frac{\mathrm{trace}\,A_1}{M}\right|<2$.

When $M$ is large, however, $\frac{\mathrm{trace}\,A_1}{M}$ will be concentrated around the mean $\mu$ of the normal distribution, with variance of order $O(1/M)$. Since $\left|(p-2)+\mu\right|\ge2$ when $p$ is large, we conclude that, when both $M$ and $p$ are large, the condition $|S|<2M$ is satisfied only when $\frac{\mathrm{trace}\,A_1}{M}$ takes an extreme value.
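If it helps, here is a numerical sanity check of this reduction (a sketch only; `L` is the standard companion linearization of the quadratic pencil $C_\lambda$, and the sizes are arbitrary). The roots of $q$ should coincide with the nonzero eigenvalues of $B$, and their sum should equal $M(p-2)+\mathrm{trace}\,A_1$:

```python
import numpy as np

rng = np.random.default_rng(4)
M, p = 3, 5                                    # arbitrary small sizes
A = [rng.normal(size=(M, M)) for _ in range(p)]

B = np.zeros((p * M, p * M))
B[:M, :] = np.hstack(A)
B[M:, :(p - 1) * M] = np.tile(np.eye(M), (p - 1, p - 1))

# Companion linearization L of the quadratic pencil C_lambda:
# the 2M eigenvalues of L are exactly the roots of q(lambda) = det(C_lambda).
I, Z = np.eye(M), np.zeros((M, M))
B1 = (p - 2) * I + A[0]                        # coefficient of lambda in C_lambda
B0 = (p - 2) * A[0] - sum(A[1:])               # constant term of C_lambda
L = np.block([[B1, -B0], [I, Z]])

eig_B, eig_L = np.linalg.eigvals(B), np.linalg.eigvals(L)
# every root of q is (numerically) among the eigenvalues of B ...
print(max(np.abs(eig_B - z).min() for z in eig_L))             # ~ 0
# ... and their sum S matches M*(p-2) + trace(A_1)
print(np.sum(eig_L).real, M * (p - 2) + np.trace(A[0]))
```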
