[Math] Why is minimizing the nuclear norm of a matrix a good surrogate for minimizing the rank?

linear-algebra, matrices, matrix-rank, normed-spaces, nuclear-norm

A method called "Robust PCA" solves the matrix decomposition problem

$$L^*, S^* = \arg \min_{L, S} \|L\|_* + \|S\|_1 \quad \text{s.t. } L + S = X$$

as a surrogate for the actual problem

$$L^*, S^* = \arg \min_{L, S} \operatorname{rank}(L) + \|S\|_0 \quad \text{s.t. } L + S = X,$$

i.e. the actual goal is to decompose the data matrix $X$ into a low-rank signal matrix $L$ and a sparse noise matrix $S$ (here $\|S\|_0$ counts the nonzero entries of $S$). In this context: why is the nuclear norm a good approximation for the rank of a matrix? I can think of matrices with low nuclear norm but high rank, and vice versa. Is there any intuition one can appeal to?
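For concreteness, here is a minimal numpy sketch of the convex program above, using the standard alternating proximal (inexact augmented Lagrangian / ADMM) iterations from the Robust PCA literature. The function names, fixed iteration count, and the default $\mu$ are illustrative choices; $\lambda = 1/\sqrt{\max(m,n)}$ is the weight commonly suggested for this problem.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the prox operator of tau * ||.||_*
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    # Entrywise soft thresholding: the prox operator of tau * ||.||_1
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def robust_pca(X, lam=None, mu=None, n_iter=200):
    m, n = X.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))        # common weight for ||S||_1
    mu = mu or m * n / (4.0 * np.abs(X).sum())   # step-size heuristic
    L, S, Y = (np.zeros_like(X) for _ in range(3))  # Y: dual for L + S = X
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)        # low-rank update
        S = shrink(X - L + Y / mu, lam / mu)     # sparse update
        Y = Y + mu * (X - L - S)                 # dual ascent step
    return L, S

# Synthetic low-rank-plus-sparse test: recovered L is typically rank 5.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
S0 = np.where(rng.random((50, 50)) < 0.05, 10.0, 0.0)
L, S = robust_pca(L0 + S0)
print(np.linalg.matrix_rank(L))
```

Note that the $L$-update is a singular value thresholding step: the prox operator of the nuclear norm soft-thresholds the singular values, in exact analogy with how the prox of the $\ell_1$ norm soft-thresholds vector entries.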

Best Answer

Why does compressed sensing work? Because the $\ell_1$ ball in high dimensions is extremely "pointy": the extreme values of a linear function on this ball are very likely to be attained on faces of low dimension, which consist of sparse vectors. The same intuition carries over to matrices, because the nuclear norm is the $\ell_1$ norm of the vector of singular values, while the rank is the number of its nonzero entries. Sparsity of the singular-value vector is exactly low rank, as @mrig wrote before me.
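This correspondence is easy to check numerically; a small numpy demonstration on an arbitrary matrix (the dimensions and tolerance here are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
s = np.linalg.svd(A, compute_uv=False)  # singular values of A (nonnegative)

# Nuclear norm = l1 norm of the singular values; rank = number of nonzeros.
print(np.isclose(np.linalg.norm(A, "nuc"), s.sum()))           # True
print(np.linalg.matrix_rank(A) == np.count_nonzero(s > 1e-12)) # True
```

So minimizing the nuclear norm is precisely the $\ell_1$ relaxation of rank minimization, applied to the singular-value vector.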