[Math] SDP formulation of noisy low rank matrix completion

Tags: convex-optimization, matrix-analysis, matrix-completion, semidefinite-programming

Exact low-rank matrix completion via nuclear norm minimization can be formulated as a semidefinite program (SDP). Following the notation in the paper, a convex formulation for noisy matrix completion is:

$$\text{minimize} \,\, \|X\|_* \quad \text{subject to} \quad \|\mathcal{P}_\Omega(X)-\mathcal{P}_\Omega(M)\|_2\le\epsilon \tag{*}$$

where $\mathcal{P}_\Omega(X)$ is the projection of $X$ onto the set of observed entries $\Omega$. This problem can be formulated as the SDP below, in the variables $X, W_1, W_2$:

$$\begin{array}{rl} \text{minimize} & \operatorname{trace}(W_1)+\operatorname{trace}(W_2)\\[4pt] \text{subject to} & \begin{bmatrix} \epsilon I & \mathcal{P}_\Omega(X-M) \\ \mathcal{P}_\Omega(X-M)^T & \epsilon I \end{bmatrix} \succeq 0\\[8pt] & \begin{bmatrix} W_1 & X \\ X^T & W_2 \end{bmatrix} \succeq 0 \end{array}$$
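To see why the first linear matrix inequality enforces the spectral-norm constraint: by a Schur-complement argument, $\begin{bmatrix}\epsilon I & A\\ A^T & \epsilon I\end{bmatrix}\succeq 0$ holds if and only if $\|A\|_2\le\epsilon$. Here is a quick numerical sanity check of that equivalence in NumPy (the matrix size, random seed, and eigenvalue tolerance are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
sigma_max = np.linalg.norm(A, 2)  # spectral norm = largest singular value

def block_psd(A, eps):
    """Check whether [[eps*I, A], [A^T, eps*I]] is positive semidefinite."""
    m, n = A.shape
    K = np.block([[eps * np.eye(m), A],
                  [A.T, eps * np.eye(n)]])
    return np.linalg.eigvalsh(K).min() >= -1e-9

# The block matrix is PSD exactly when eps >= ||A||_2
assert block_psd(A, sigma_max + 0.1)
assert not block_psd(A, sigma_max - 0.1)
```

The check works because the eigenvalues of the block matrix are $\epsilon \pm \sigma_i(A)$, so the smallest one is $\epsilon - \|A\|_2$.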

The question is: when the $2$-norm constraint of problem (*) is replaced by a Frobenius-norm constraint, i.e.,

$$\text{minimize} \,\, \|X\|_* \quad \text{subject to} \quad \|\mathcal{P}_\Omega(X)-\mathcal{P}_\Omega(M)\|_F \le \epsilon$$

it can reportedly still be cast as an SDP, as stated here and in many other papers. Could anyone tell me how to formulate this problem as an SDP?


Edit: follow-up question.

Best Answer

The Frobenius norm does not cause a problem. Remember, the Frobenius norm of a matrix $X$ is nothing more than the 2-norm of the vector formed by stacking the columns of $X$ on top of each other, and for vectors, the vector 2-norm and the matrix 2-norm coincide. So if $\mathcal{Q}$ implements this "stacking" isomorphism, we have $$\|\mathcal{P}_\Omega(X-M)\|_F=\|\mathcal{Q}(\mathcal{P}_\Omega(X-M))\|_2$$ and $$\|\mathcal{P}_\Omega(X-M)\|_F\leq\epsilon \quad\Longleftrightarrow\quad \begin{bmatrix} \epsilon I & \mathcal{Q}(\mathcal{P}_\Omega(X-M)) \\ \mathcal{Q}(\mathcal{P}_\Omega(X-M))^T & \epsilon \end{bmatrix} \succeq 0$$

In fact, handling the Frobenius norm is somewhat easier than the spectral norm in practice, because most solvers handle vector 2-norms directly and efficiently. See second-order cone programming or "semidefinite quadratic linear programming" (SQLP) for more background.
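The stacking argument can be verified numerically. In the sketch below, the column-stacking map $\mathcal{Q}$ is NumPy's `flatten(order="F")`, and the note `X_err` stands in for $\mathcal{P}_\Omega(X-M)$; the dimensions and tolerance are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X_err = rng.standard_normal((4, 3))   # stands in for P_Omega(X - M)
v = X_err.flatten(order="F")          # Q: stack the columns into one vector

# Frobenius norm of the matrix equals the 2-norm of the stacked vector
assert np.isclose(np.linalg.norm(X_err, "fro"), np.linalg.norm(v, 2))

def arrow_psd(v, eps):
    """Check whether [[eps*I, v], [v^T, eps]] is positive semidefinite."""
    d = v.size
    K = np.block([[eps * np.eye(d), v[:, None]],
                  [v[None, :], np.array([[eps]])]])
    return np.linalg.eigvalsh(K).min() >= -1e-9

# The block matrix is PSD exactly when eps >= ||v||_2 = ||X_err||_F
fro = np.linalg.norm(v, 2)
assert arrow_psd(v, fro + 0.1)
assert not arrow_psd(v, fro - 0.1)
```

Here the smallest eigenvalue of the block matrix is $\epsilon - \|v\|_2$, which is nonnegative exactly when the Frobenius-norm constraint holds.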
