Optimizing over a vector and a matrix at the same time

convex-optimization, convex-analysis, positive-semidefinite, semidefinite-programming

I want to know whether my treatment of the following problem (proving convexity, formulating it as an SDP) is correct:
\begin{equation*}
\begin{aligned}
& \min_{c\in \mathbb{C}^n,D\in \mathbb{H}_+^n} && \|c\|_2 + \lambda \cdot \mathrm{Tr}(D) \\
& \text{subject to} && h^Tc + \mathrm{Tr}(H^TD) = b\\
&&& D \succeq 0
\end{aligned}
\end{equation*}

where $D \in \mathbb{H}_+^n$ is a Hermitian positive-semidefinite matrix.

This problem is derived from the trace heuristic for matrix rank minimization, but it is regularized with a vector $c$, and the equality constraint contains a term in $c$.
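For concreteness, here is a minimal numerical sketch of the problem in CVXPY (this is my own illustration, not part of the original question; the data $n$, $h$, $H$, $b$, $\lambda$ below are made-up placeholders):

```python
import numpy as np
import cvxpy as cp

# Placeholder data; n, h, H, b, lam are hypothetical stand-ins.
n, lam = 4, 0.5
rng = np.random.default_rng(0)
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = 1.0 + 0.5j

c = cp.Variable(n, complex=True)          # c in C^n
D = cp.Variable((n, n), hermitian=True)   # D Hermitian

# Objective: ||c||_2 + lam * Tr(D); the trace of a Hermitian matrix is real.
objective = cp.norm(c, 2) + lam * cp.real(cp.trace(D))
constraints = [
    h @ c + cp.trace(H.T @ D) == b,  # affine equality from the problem
    D >> 0,                          # Hermitian PSD cone constraint
]
prob = cp.Problem(cp.Minimize(objective), constraints)
prob.solve(solver=cp.SCS)
print(prob.value)
```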

Proof of Convexity

To show that the problem above minimizes a convex function over a convex set, I show that the objective is jointly convex in $(c,D)$ and that the equality constraint defines a convex set over $(c,D)$. The PSD cone constraint is convex and does not need to be shown.

From here, I write $x = (c,D) \in \mathbb{C}^n \times \mathbb{H}_+^n$ and set $\mathbf{X} = \mathbb{C}^n \times \mathbb{H}_+^n$.

Convex Objective

For any $\theta \in [0,1]$ and given $x_1=(c_1,D_1),\,x_2=(c_2,D_2) \in \mathbf{X}$, denote the convex combination by $x_3 = \theta x_1 + (1-\theta) x_2$, noting that $x_3 =(c_3,D_3) \in \mathbf{X}$.

\begin{equation}
\begin{split}
f(\theta x_1 + (1-\theta) x_2) &= \|\theta c_1 + (1-\theta)c_2\|_2 + \lambda \cdot \mathrm{Tr}(\theta D_1 + (1-\theta) D_2) \\
&\leq \theta\|c_1\|_2 + (1-\theta)\|c_2\|_2 + \theta \lambda\, \mathrm{Tr}(D_1) + (1-\theta)\lambda\, \mathrm{Tr}(D_2) \\
&= \theta f(x_1) + (1-\theta)f(x_2)
\end{split}
\end{equation}

Here the inequality is the triangle inequality together with homogeneity of $\|\cdot\|_2$; the trace term is linear, so it contributes with equality.
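As a sanity check (not part of the proof), one can verify the inequality numerically on random samples; the sketch below assumes NumPy and randomly generated Hermitian PSD matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 3, 0.5

def f(c, D):
    # Objective: ||c||_2 + lam * Tr(D); Tr of a Hermitian matrix is real.
    return np.linalg.norm(c) + lam * np.trace(D).real

def random_psd(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T  # Hermitian PSD by construction

for _ in range(1000):
    c1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    c2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    D1, D2 = random_psd(n), random_psd(n)
    t = rng.uniform()
    lhs = f(t * c1 + (1 - t) * c2, t * D1 + (1 - t) * D2)
    rhs = t * f(c1, D1) + (1 - t) * f(c2, D2)
    assert lhs <= rhs + 1e-9  # convexity inequality
print("convexity inequality held on all samples")
```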

Convex Set

The set defined by the equality constraint is convex simply because the constraint is affine in $(c,D)$.

SDP

Let $\mathbf{Y} = \begin{bmatrix} \operatorname{diag}(c) & 0 \\ 0 & D\end{bmatrix}$ and reformulate the problem as:
\begin{equation*}
\begin{aligned}
& \min_{Y\in \mathbb{H}_+^{2n}} && \left\langle \begin{bmatrix} \operatorname{diag}(c) & 0 \\ 0 & \lambda I_n\end{bmatrix} , Y\right\rangle_{\mathbb{H}^{2n}} \\
& \text{subject to} && \left\langle \begin{bmatrix} \operatorname{diag}(h) & 0 \\ 0 & H\end{bmatrix}, Y\right\rangle_{\mathbb{H}^{2n}} = b \\
&&& Y \succeq 0 \quad (\dagger)
\end{aligned}
\end{equation*}

($\dagger$): This constraint is stronger than the original formulation in the sense that $Y \succeq 0$ forces the block $\operatorname{diag}(c)$ to be Hermitian PSD, i.e., $c$ real with $c \geq 0$ entrywise, whereas the original formulation places no such constraint on $c$.
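A quick numerical illustration of the $(\dagger)$ remark (my own sketch; the entry $-0.3$ is an arbitrary choice):

```python
import numpy as np

# If any entry of c is negative (or has a nonzero imaginary part, since Y
# must be Hermitian), the block matrix Y = blkdiag(diag(c), D) cannot be PSD.
c = np.array([1.0, -0.3, 2.0])            # one negative entry
D = np.eye(3)                              # any PSD block
Y = np.block([[np.diag(c), np.zeros((3, 3))],
              [np.zeros((3, 3)), D]])
print(np.linalg.eigvalsh(Y).min())         # -0.3 < 0, so Y is not PSD
```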

My Questions

  1. Is my proof of convexity correct?
  2. How do I formulate the problem as an SDP? Specifically, what should I do about the inequality constraint $(\dagger)$?

Best Answer

  1. No, you seem to think that a convex set is sufficient, but you also need the convex set to be defined by convex functions (which is the case here). For the objective you can just state that the sum of convex functions is convex.

  2. You need to write $c$ as the difference of two nonnegative vectors (resulting in an extra block row/column in $Y$), as sketched below. This is the same as formulating a linear optimization problem in standard form when it initially has a free variable.
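A concrete reading of point 2 (my interpretation, written for the real case; for complex $c$ one would presumably apply the split to the real and imaginary parts separately): a free variable is written as

\begin{equation*}
c = c^{+} - c^{-}, \qquad c^{+} \geq 0, \quad c^{-} \geq 0,
\end{equation*}

exactly as when converting an LP to standard form, so the block-diagonal variable $Y$ gains an extra nonnegative diagonal block $\operatorname{diag}(c^{-})$, and every occurrence of $c$ in the problem data is replaced by $c^{+} - c^{-}$.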
