Proximal operator of the nuclear norm with non-negativity constraints

convex-optimization, linear-algebra, nuclear-norm, optimization, proximal-operators

Let $\bf R$ be a given $n \times n$ matrix. Consider the following regularized least-squares problem in the $n \times n$ matrix variable $\bf L$:

\begin{equation}
\hat{\mathbf{L}} = \operatorname*{arg\,min}_{\mathbf{L} \geq \mathbf{0}} \; \mu \|\mathbf{L}\|_* + \dfrac{1}{2\lambda}\|\mathbf{L}-\mathbf{R}\|_F^2
\end{equation}

where $\mathbf{L} \geq \mathbf{0}$ denotes that the matrix $\bf L$ is entrywise non-negative, i.e., every entry of $\bf L$ is non-negative.

When the non-negativity constraint on $\mathbf{L}$ is absent, I know the solution is given by singular value thresholding of $\mathbf{R}$. However, I can't figure out how to modify this to satisfy the non-negativity constraint on $\mathbf{L}$. Will simply setting the negative entries of the thresholded solution to zero work?
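For reference, the unconstrained solution mentioned above can be sketched in a few lines of numpy. This is only the singular value thresholding operator for the problem without the constraint; the function name `svt` is my own choice:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * ||.||_*.

    Solves min_L tau * ||L||_* + (1/2) * ||L - X||_F^2 (no constraint)
    by soft-thresholding the singular values of X.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)
```

For the problem above with no constraint, the minimizer would be `svt(R, mu * lam)`. Note that this output can have negative entries, which is exactly why the constrained problem is harder.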

I have checked the question "Least-squares with nuclear norm regularization", in which $\mathbf{L}$ is constrained to be positive definite. However, my question is about the entries of $\mathbf{L}$ being non-negative. Can someone please answer this?

Best Answer

You could use the Douglas-Rachford method, which minimizes $f(L) + g(L)$, where $f$ and $g$ are closed convex functions whose proximal operators can be evaluated efficiently. For this problem, you can take $f(L) = \mu \|L\|_*$ and $g(L) = I(L) + \frac{1}{2\lambda} \| L - R \|^2_F$, where $I(L) = 0$ if $L \geq 0$ and $I(L) = \infty$ otherwise. Both proximal operators have closed forms: the prox of $tf$ is singular value thresholding with threshold $t\mu$, and the prox of $tg$ acts entrywise,
$$\big(\operatorname{prox}_{tg}(V)\big)_{ij} = \max\!\left(0,\; \frac{\lambda V_{ij} + t R_{ij}}{\lambda + t}\right),$$
a weighted average of $V$ and $R$ projected onto the non-negative orthant.
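Here is a minimal numpy sketch of that splitting, under my own choices of step size `t` and iteration count (neither is prescribed by the method; in practice you would add a stopping criterion):

```python
import numpy as np

def svt(X, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def prox_g(V, t, lam, R):
    """Prox of t*g with g(L) = I(L >= 0) + ||L - R||_F^2 / (2*lam).

    The problem separates per entry: a weighted average of V and R,
    clipped at zero.
    """
    return np.maximum((lam * V + t * R) / (lam + t), 0.0)

def douglas_rachford(R, mu, lam, t=1.0, iters=1000):
    """Douglas-Rachford for min mu*||L||_* + ||L-R||_F^2/(2*lam) s.t. L >= 0."""
    Z = R.copy()
    for _ in range(iters):
        L = svt(Z, t * mu)                  # prox of t*f
        Y = prox_g(2.0 * L - Z, t, lam, R)  # prox of t*g at the reflection
        Z = Z + Y - L                       # fixed-point update
    return Y  # Y is entrywise non-negative by construction
```

Returning the `prox_g` iterate (rather than the `svt` iterate) guarantees the output is feasible even when the loop stops short of convergence; at convergence the two iterates coincide at the minimizer.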
