[Math] How to estimate condition number based on SVD of submatrix

Tags: condition-number, svd

Given an $m\times n$ ($m\geq n$) real-valued matrix $A$, its SVD, and an $n$-dimensional real-valued vector $x$, is there a computationally efficient way to accurately estimate the condition number of the matrix $B$ formed by appending $x$ as an additional row to $A$, without computing the SVD of $B$ from scratch? For example, by projecting $x$ onto the effective right null space of $A$?

This is needed in an application where a list of candidate vectors $x_i$ is given, and the goal is to extend the matrix $A$ with a candidate so that the condition number of $B$ is smaller than the condition number of $A$.

My question is related, but not equivalent, to the inverse of the Subset Selection problem (see Golub & Van Loan, Matrix Computations, 3rd ed., pp. 590–595). In other words, I would like to take an existing matrix and a set of candidate rows and constructively build the "best" well-conditioned (albeit over-determined) matrix from them, rather than remove rows.

For a simple example, consider the matrix

$A=\left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 0.5 & 0\\
0 & 0 & 0.01
\end{array}\right)$

(with condition number $\kappa \left(A\right)=100$) and the candidate vectors

$x_{1}=\left(\begin{array}{ccc}
0 & 1 & 0.1\end{array}\right)$,

$x_{2}=\left(\begin{array}{ccc}0 & 0.1 & 0.05\end{array}\right)$

Extending $A$ by adding $x_1$ as an extra row produces a matrix, $B_1$, with condition number $\kappa\left(B_1\right)\approx 24.55$, whereas extending $A$ by $x_2$ produces a matrix, $B_2$, with condition number $\kappa\left(B_2\right)\approx 19.99$. Is there a computationally inexpensive way to determine that $x_2$ is the better choice?
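For reference, these numbers are easy to check by brute force, i.e. by computing full SVDs of the extended matrices, which is exactly the cost I want to avoid in the real application. A quick sketch, assuming NumPy:

```python
import numpy as np

A = np.diag([1.0, 0.5, 0.01])
x1 = np.array([0.0, 1.0, 0.1])
x2 = np.array([0.0, 0.1, 0.05])

# 2-norm condition numbers, each computed via a full SVD.
print(np.linalg.cond(A))                   # 100.0
print(np.linalg.cond(np.vstack([A, x1])))  # ~24.55
print(np.linalg.cond(np.vstack([A, x2])))  # ~19.99
```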

Best Answer

I was able to find the answer to my question by reading this, which also provides a great list of references. I'll leave out the detailed derivation presented in those works and just summarize the answer. Keep in mind that I'm only interested in the condition number of the updated matrix, not in all of its singular values or any singular vectors.

First, let $A=USV^T$ be the singular value decomposition of $A$. Then the matrices

$B=\left(\begin{array}{c} A\\ x^{T} \end{array}\right)$ and $M=\left(\begin{array}{c} S\\ z \end{array}\right)$ (where $z\equiv x^{T}V$ is the appended row expressed in the right singular basis of $A$)

have the same singular values. These singular values can be obtained from the secular (characteristic) equation, i.e. by finding the zeros of

$f\left(\sigma\right)\equiv1+\sum_i\frac{z_i^{2}}{S_{ii}^{2}-\sigma^{2}}$
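As a sanity check on this setup, here is a minimal sketch (assuming NumPy; the random $A$ and appended row $x$ are just placeholders) that forms $z$ from the SVD of $A$ and verifies numerically that $f$ vanishes at the singular values of $B$:

```python
import numpy as np

# Illustrative placeholders for A and the appended row x.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U @ diag(s) @ Vt
z = Vt @ x                                        # z = x^T V, i.e. x in the right singular basis

def f(sigma):
    """Secular function; its zeros are the singular values of B = [A; x^T]."""
    return 1.0 + np.sum(z**2 / (s**2 - sigma**2))

# f should (nearly) vanish at the singular values of B computed directly.
B = np.vstack([A, x])
for sv in np.linalg.svd(B, compute_uv=False):
    print(sv, f(sv))  # second column should be ~0, up to roundoff
```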

There is quite a bit of literature on how to find these zeros stably and efficiently. Fortunately, I only need the smallest and largest of them to compute the condition number. It is easy to see from the form of this function that the old and new singular values interlace, so we know where to start looking for the two extreme zeros and can find each of them with a quick iterative search.
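Here is a rough sketch of that search, assuming NumPy/SciPy; the function name `cond_of_extended` and its `pad` parameter are my own illustrative choices, and production secular-equation solvers treat nearly coincident poles and zero entries of $z$ far more carefully than a plain bracketed root finder does:

```python
import numpy as np
from scipy.optimize import brentq

def cond_of_extended(s, z, pad=1e-8):
    """Estimate kappa([A; x^T]) from the singular values s of A (descending) and z = V^T x.

    Assumes the generic case: distinct s and no zero entries in z. Otherwise the
    corresponding terms should be deflated first (an S_ii with z_i = 0 is itself
    a singular value of the extended matrix).
    """
    def f(sigma):
        return 1.0 + np.sum(z**2 / (s**2 - sigma**2))

    # Interlacing brackets: the largest new singular value lies in
    # (s[0], sqrt(s[0]^2 + ||z||^2)], the smallest in (s[-1], s[-2]).
    # Nudge the endpoints off the poles so brentq sees finite values.
    sigma_max = brentq(f, s[0] * (1 + pad), np.sqrt(s[0]**2 + z @ z))
    sigma_min = brentq(f, s[-1] * (1 + pad), s[-2] * (1 - pad))
    return sigma_max / sigma_min

# Comparison against a full SVD of the extended matrix, on a random example.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)
_, s, Vt = np.linalg.svd(A, full_matrices=False)
print(cond_of_extended(s, Vt @ x))        # secular-equation estimate
print(np.linalg.cond(np.vstack([A, x])))  # reference value via full SVD
```

Note that the diagonal toy example above is exactly one of the non-generic cases: the first entry of $z$ is zero there, so $S_{11}$ remains a singular value of $B$ and that term has to be deflated from the sum before bracketing and searching.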