[Math] Efficient SVD of a matrix without some of the columns

computer science, computer-algebra, matrices, matrix analysis

I have a matrix $A \in \mathbb{R}^{p \times q}$ of rank $r$ together with its SVD, i.e.,
$$
A = U S V^\top,
$$
where $U \in \mathbb{R}^{p \times r}$ and $V \in \mathbb{R}^{q \times r}$ have orthonormal columns, and $S = \mathrm{diag}\left(s_1, \dots, s_r\right)$ with singular values $s_1 \geq \dots \geq s_r > 0$. I would like to compute the SVD of $B = [A_{i_1}, \dots, A_{i_k}] \in \mathbb{R}^{p \times k}$ with $k \leq r$, i.e., of a matrix that consists of a full-rank selection of columns from $A$.

If the SVD is computed for $B$ from scratch, the time complexity is $\mathcal{O}(pk\min(p,k))$. Can this be done more efficiently?

Note:

  • If you consider the right singular vectors of $A$, column selection effectively projects them onto a subspace spanned by part of the canonical basis. Projection does not preserve the angles between the vectors, so one cannot do better than simply re-running the SVD on $B$ (see the sketch after this list).
  • However, does there exist a rotation of the singular vectors in $V$ that preserves the angles and can be computed more efficiently than running the SVD on $B$ from scratch?
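
To make the first note concrete: selecting columns $i_1, \dots, i_k$ means $B = AP$ for a selection matrix $P \in \mathbb{R}^{q \times k}$, so $B = U \left(S V^\top P\right)$, where $V^\top P$ just picks the selected rows of $V$. Below is a minimal numpy sketch of this relation (the index set `idx` and all names are illustrative). Since $U$ times the left factor of the small SVD still has orthonormal columns, the repaired factorization is a genuine SVD of $B$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r = 8, 6, 4
A = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))  # rank-r matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
U, s, V = U[:, :r], s[:r], Vt[:r].T        # thin rank-r factors, V is q x r

idx = [0, 2, 5]                            # illustrative column selection, k = 3
B = A[:, idx]

# Column selection only touches the right factor: B = U @ (diag(s) @ V[idx].T),
# but M = diag(s) @ V[idx].T is an r x k matrix that is no longer diagonal.
M = np.diag(s) @ V[idx].T
assert np.allclose(B, U @ M)

# An SVD of the small matrix M repairs the factorization of B ...
Um, sm, Vmt = np.linalg.svd(M, full_matrices=False)
assert np.allclose(B, (U @ Um) @ np.diag(sm) @ Vmt)
# ... but forming U @ Um already costs O(p*r*k), so with k <= r this route
# does not beat the O(p*k*min(p,k)) of simply recomputing the SVD of B.
```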

Best Answer

After some research, I managed to find a reasonable answer. The operation is called updating the SVD (when new columns are added to the original matrix) or downdating it (when columns are removed). A full update/downdate of the SVD by a single column can be done in $\mathcal{O}\left(r^2(1 + p + q)\right)$ time [1]. Such modifications of the SVD can also be parallelized. A rough sketch of a single-column downdate follows the links below.

Here are a few links to papers that deal with this problem:

  1. Fast low-rank modifications of the thin singular value decomposition
  2. A stable and fast algorithm for updating the singular value decomposition
  3. A fast and stable algorithm for downdating the singular value decomposition
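
For concreteness, here is a minimal numpy sketch of a single-column downdate in the spirit of [1]: removing column $j$ is the rank-one update $A - (Ae_j)e_j^\top$, which zeros the column, followed by deleting the zeroed column. All function names are mine, and the small core matrix is diagonalized with a plain $\mathcal{O}(r^3)$ SVD instead of the faster structured diagonalization from the paper, so this only illustrates the $\mathcal{O}\left(r^2(1 + p + q)\right)$ rotation structure:

```python
import numpy as np

def svd_rank_one_update(U, s, V, a, b, tol=1e-12):
    """Thin SVD of U @ diag(s) @ V.T + np.outer(a, b), a sketch in the
    spirit of [1]; U and V are assumed to have orthonormal columns."""
    r = s.size
    m = U.T @ a                     # component of a inside span(U) ...
    pa = a - U @ m                  # ... and its orthogonal remainder
    Ra = np.linalg.norm(pa)
    P = pa / Ra if Ra > tol else np.zeros_like(pa)
    n = V.T @ b                     # same split for b against span(V)
    qb = b - V @ n
    Rb = np.linalg.norm(qb)
    Q = qb / Rb if Rb > tol else np.zeros_like(qb)

    # Small (r+1) x (r+1) core: [[diag(s), 0], [0, 0]] + [m; Ra] [n; Rb]^T.
    K = np.zeros((r + 1, r + 1))
    K[:r, :r] = np.diag(s)
    K += np.outer(np.append(m, Ra), np.append(n, Rb))

    # Plain O(r^3) SVD of the core; [1] diagonalizes K faster by
    # exploiting its broken-arrow structure.
    Uk, sk, Vkt = np.linalg.svd(K)
    U2 = np.column_stack([U, P]) @ Uk       # O(p r^2)
    V2 = np.column_stack([V, Q]) @ Vkt.T    # O(q r^2)

    keep = sk > tol * max(sk[0], 1.0)       # drop numerically zero modes
    return U2[:, keep], sk[keep], V2[:, keep]

def svd_remove_column(U, s, V, j):
    """Downdate: thin SVD of A with column j deleted, given A = U diag(s) V.T."""
    a = -(U * s) @ V[j]             # a = -A[:, j]
    b = np.zeros(V.shape[0])
    b[j] = 1.0                      # A + a b^T zeros out column j
    U2, s2, V2 = svd_rank_one_update(U, s, V, a, b)
    return U2, s2, np.delete(V2, j, axis=0)  # row j of V2 is now ~0

# Example: delete column 2 of a random rank-r matrix and check the result.
rng = np.random.default_rng(0)
p, q, r = 8, 6, 4
A = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U, s, V = U[:, :r], s[:r], Vt[:r].T

U2, s2, V2 = svd_remove_column(U, s, V, 2)
assert np.allclose(U2 @ np.diag(s2) @ V2.T, np.delete(A, 2, axis=1))
```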