Lower bound on smallest singular value of arbitrary square matrix

convex optimization, eigenvalues-eigenvectors, matrices, numerical linear algebra

The background of my question is that I want to compute an arbitrary square matrix $A \in \mathbb{R}^{n \times n}$ as the variable of a convex optimization problem and ensure that it is regular, i.e., invertible. For this reason, I would like to add a second, also convex, summand to the objective, or, alternatively, add a convex inequality constraint to the optimization problem. Adding, e.g., $-\log(|\det(A)|)$ to the objective function ensures invertibility, but this term is not convex.
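To make the non-convexity concrete, here is a small numerical counterexample (my own illustration, not part of the original question): the penalty $-\log(|\det(A)|)$ evaluated at the midpoint of two invertible matrices can exceed the average of the endpoint values, which a convex function cannot do.

```python
import numpy as np

def penalty(M):
    # -log|det(M)|; slogdet avoids overflow and handles det = 0 cleanly
    sign, logabsdet = np.linalg.slogdet(M)
    return np.inf if sign == 0 else -logabsdet

A = np.eye(2)
B = np.diag([1.0, -1.0])
mid = 0.5 * (A + B)          # = diag(1, 0), which is singular

f_mid = penalty(mid)                      # inf
f_avg = 0.5 * (penalty(A) + penalty(B))   # 0.0
print(f_mid, f_avg)   # convexity would require f_mid <= f_avg, but inf > 0
```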

Invertibility can be ensured in a convex way by the methods described in "Bounds for determinants with positive diagonals" for diagonally dominant matrices or, for somewhat more general matrices, in "Verified bounds for singular values, in particular for the spectral norm of a matrix and its inverse" (e.g., Lemma 2.1). This, however, restricts the matrices to have a certain structure, while general matrices would be much better suited for the optimization I have in mind.
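As an illustration of the kind of structural bound such references provide (a Varah-type estimate, not code taken from those papers): if $A$ is strictly row diagonally dominant with margin $\alpha = \min_i\left(|a_{ii}| - \sum_{j\neq i}|a_{ij}|\right)$, then $\|A^{-1}\|_\infty \le 1/\alpha$ and hence $\sigma_{\mathrm{min}}(A) \ge \alpha/\sqrt{n}$. The dominance margin of 1 used to build the test matrix below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))

# Force strict row diagonal dominance with margin 1 (arbitrary choice).
off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))   # off-diagonal row sums
A[np.diag_indices(n)] = off + 1.0

row_margin = np.abs(np.diag(A)) - (np.sum(np.abs(A), axis=1) - np.abs(np.diag(A)))
alpha = row_margin.min()                                # = 1 by construction

sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
print(f"lower bound alpha/sqrt(n) = {alpha / np.sqrt(n):.4f}")
print(f"true sigma_min(A)         = {sigma_min:.4f}")
```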

From my research, it seems most promising to me to consider the smallest singular value of $A$, which equals the reciprocal of the largest singular value of its inverse:

$\sigma_{\mathrm{min}}(A)=\frac{1}{\lVert A^{-1}\rVert_2}$,

where $\sigma_{\mathrm{min}}(A)$ denotes the smallest singular value of $A$ and $\lVert\cdot\rVert_2$ the spectral norm, i.e., the largest singular value. My key question is whether one can obtain a (concave) positive lower bound for the smallest singular value using $\sigma_{\mathrm{min}}(A)=\frac{1}{\lVert A^{-1}\rVert_2}$. Maximizing such a concave lower bound ensures that the eigenvalues of $A$ have absolute value $>0$, see this question. The bound itself can be very inaccurate; the main thing is that $\sigma_{\mathrm{min}}(A)>0$ is ensured. As far as I have seen, using the triangle inequality (subadditivity) and the submultiplicativity of matrix norms only leads to upper bounds for $\sigma_{\mathrm{min}}(A)$, not to lower ones. Maybe it is possible to combine subadditivity, submultiplicativity, and other matrix norm properties to obtain such a lower bound that depends on $A$ or its entries, but not on its inverse, so that the optimization remains tractable?
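As a quick numerical sanity check (my own illustration, not from the question), the identity $\sigma_{\mathrm{min}}(A)=\frac{1}{\lVert A^{-1}\rVert_2}$ is easy to verify, and the same snippet shows why a concave surrogate is needed at all: $\sigma_{\mathrm{min}}$ itself is not concave over general square matrices.

```python
import numpy as np

def sigma_min(M):
    return np.linalg.svd(M, compute_uv=False)[-1]

# sigma_min(A) = 1 / ||A^{-1}||_2 for a random invertible A
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
print(sigma_min(A), 1.0 / np.linalg.norm(np.linalg.inv(A), 2))

# sigma_min is not concave: the midpoint of I and -I is the zero matrix
X, Y = np.eye(2), -np.eye(2)
mid = 0.5 * (X + Y)
print(sigma_min(mid), 0.5 * (sigma_min(X) + sigma_min(Y)))  # 0.0 < 1.0
```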

Answers on this specific idea (lower bound for minimal singular value) or on how to ensure invertibility in a convex way in general would be greatly appreciated.

Best Answer

You may consider $\|A^{-1}\|_1$ instead of $\|A^{-1}\|_2$.

In order to approximate $\|A^{-1}\|_1$, you may use the algorithm of Hager (Hager, W. W. (1984), "Condition estimates", SIAM Journal on Scientific and Statistical Computing, 5(2), 311-316), which estimates the 1-norm $\|X\|_1$ of a linear operator $X$ using matrix-vector products only. Therefore, to estimate $\|A^{-1}\|_1$ with this algorithm, an efficient method for solving the linear systems $Ax = b$ (and $A^\top x = b$) is required.

This algorithm is implemented in MATLAB's condest function for sparse and dense matrices, and in LAPACK's routine ?LACON (where ? stands for the precision prefix).
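In Python, a possible analogue (my suggestion, not mentioned in the answer) is scipy.sparse.linalg.onenormest, which implements the Higham-Tisseur block 1-norm estimator, a descendant of Hager's method. It can be applied to $A^{-1}$ through a LinearOperator whose matvec/rmatvec solve $Ax=b$ and $A^\top x=b$ from a single LU factorization:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, onenormest

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))

# Factor A once; every step of the estimator then needs only triangular solves.
lu, piv = lu_factor(A)

# A^{-1} as a linear operator: matvec solves A x = b, rmatvec solves A^T x = b.
Ainv = LinearOperator(
    (n, n),
    matvec=lambda b: lu_solve((lu, piv), b),
    rmatvec=lambda b: lu_solve((lu, piv), b, trans=1),
)

est = onenormest(Ainv)                          # estimate of ||A^{-1}||_1
exact = np.linalg.norm(np.linalg.inv(A), 1)     # exact value, for comparison
print(f"estimated ||A^-1||_1 = {est:.4e}, exact = {exact:.4e}")
```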
