Can I reformulate the given SDP such that the main constraint becomes an LMI?

linear-matrix-inequality, optimization, relaxations, semidefinite-programming

I am new to SDP and LMI's and trying to solve an optimization problem of the following form:
\begin{equation}
\begin{aligned}
\text{maximize} \quad & \sum_{j=1}^k w_j\\
\text{subject to}
\quad & D - N^* W^* D W N \succeq 0\\
\quad & d_j > 0 \ \forall \ j = 1,\dots,k\\
\end{aligned}
\end{equation}

where $D = \textrm{diag}(d_1,\dots,d_k) \in \mathbb{R}^{n\times n}$ and $W = \textrm{diag}(w_1,\dots,w_k) \in \mathbb{R}^{n\times n}$ (so $k = n$ and, in particular, $W = W^*$), and $N \in \mathbb{C}^{n\times n}$ is a given matrix.
Using the Schur complement (which applies here since $D \succ 0$), I can rewrite this problem as:
\begin{equation}
\begin{aligned}
\text{maximize} \quad & \sum_{j=1}^k w_j\\
\text{subject to}
\quad & \left[\begin{array}{cc}
D^{-1} & W N \\
N^* W & D
\end{array}\right] \succeq 0\\
\quad & d_j > 0 \ \forall \ j = 1,\dots,k\\
\end{aligned}
\end{equation}

However, the resulting $D^{-1}$ term is nonlinear in the decision variables, so this is still not an SDP that I can hand to a solver directly.
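For concreteness, here is a quick numerical sanity check of the Schur-complement step on a hypothetical random instance (my own data, taking $k = n$ and fixing $F = D^{-1}$): the block matrix is PSD exactly when the original constraint holds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Hypothetical random instance with k = n: positive diagonal D, diagonal W, complex N.
d = rng.uniform(0.5, 2.0, n)
w = rng.uniform(-1.0, 1.0, n)
D, W = np.diag(d), np.diag(w)
N = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)

def is_psd(A, tol=1e-9):
    # Smallest eigenvalue of the Hermitian part is (numerically) nonnegative.
    return np.linalg.eigvalsh((A + A.conj().T) / 2).min() >= -tol

# Original constraint: D - N* W* D W N >= 0  (W real diagonal, so W* = W).
S = D - N.conj().T @ W @ D @ W @ N

# Schur-complement block with F = D^{-1} in the (1,1) corner.
F = np.diag(1.0 / d)
M = np.block([[F, W @ N], [N.conj().T @ W, D]])

# Since F > 0, M >= 0 holds iff D - (N* W) F^{-1} (W N) = D - N* W D W N >= 0.
print(is_psd(M), is_psd(S))  # the two flags always agree
```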

Since $D$ is diagonal, in order to solve the problem I could define $F := D^{-1} = \textrm{diag}(f_1,\dots,f_k)$ and add the constraints $f_jd_j = 1 \ \forall \ j = 1,\dots,k$. However, these are of course still nonlinear constraints.

Is there anything I can do to linearize this problem so that I can use standard tooling to solve it? Alternatively, should I approach the problem differently to avoid the issue?

I feel like there should be a simple solution, but I lack the intuition to find it.

Best Answer

There is a way based on the so-called cone complementarity linearization algorithm. To see how, define the matrix $F := D^{-1}$; equivalently, treat $F$ as a new decision variable subject to the nonconvex constraint $F = D^{-1}$.

This can be relaxed to $F \succeq D^{-1}$, which, by the Schur complement, is equivalent to

$$\begin{bmatrix}F & I\\I & D\end{bmatrix}\succeq0.$$

The issue now is that the relaxation only guarantees $F \succeq D^{-1}$; we still need $FD = I$ to recover the original problem. To that end, we can use the following iterative algorithm:

  1. Set $i=0$ and pick initial values $F_0, D_0 \succ 0$.
  2. Solve $\min_{F,D}\ \mathrm{trace}(D_iF+DF_i)$ subject to the above inequality constraint.
  3. If $\mathrm{trace}(D_iF+DF_i)=2n$ (in practice, within a tolerance of $2n$), stop; otherwise set $i=i+1$, $F_i=F$, $D_i=D$, and go to Step 2.

If this algorithm converges, it converges to values of $F$ and $D$ such that $FD=I$.
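As a sketch, assuming a modeling tool such as CVXPY (the solver choice, initial values, and tolerance below are my own), the plain algorithm for your diagonal case looks like this. Note that for diagonal $F, D$ the linking LMI reduces to $f_j d_j \ge 1$, and $\mathrm{trace}(DF) = n$ holds exactly when $FD = I$, which gives an equivalent stopping test:

```python
import cvxpy as cp
import numpy as np

n = 4
f = cp.Variable(n)   # diagonal of F
d = cp.Variable(n)   # diagonal of D
F, D = cp.diag(f), cp.diag(d)
I = np.eye(n)

# Linking LMI: by the Schur complement it enforces F >= D^{-1}
# (it also forces f_j, d_j > 0, since it implies f_j d_j >= 1).
link = [cp.bmat([[F, I], [I, D]]) >> 0]

# Step 1: i = 0 and some initial F_0, D_0 (chosen arbitrarily here).
f_i, d_i = np.ones(n), 2.0 * np.ones(n)

for it in range(50):
    # Step 2: minimize trace(D_i F + D F_i) over the linking LMI.
    prob = cp.Problem(cp.Minimize(d_i @ f + f_i @ d), link)
    prob.solve(solver=cp.SCS)
    f_i, d_i = f.value, d.value
    # Step 3: stop when trace(DF) = n, i.e. FD = I (up to a tolerance).
    if abs(float(f_i @ d_i) - n) < 1e-6:
        break

print("f * d =", f_i * d_i)  # componentwise products approach 1
```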

Now, in your case it is more complicated because you have an optimization problem rather than a pure feasibility problem. The idea is to introduce a decision variable $\gamma$ and consider the problem

\begin{equation}
\begin{aligned}
\text{maximize} \quad & \gamma\\
\text{subject to}
\quad & \gamma \le \sum_{j=1}^k w_j\\
\quad & \left[\begin{array}{cc}
F & W N \\
N^* W & D
\end{array}\right] \succeq 0\\
\quad & d_j, f_j > 0 \ \forall \ j = 1,\dots,k\\
\quad & \begin{bmatrix}D & I\\I & F\end{bmatrix}\succeq0
\end{aligned}
\end{equation}

Then you can run a bisection over $\gamma$: at each bisection step, apply the cone complementarity algorithm with all of the constraints above to try to solve the nonconvex problem for that fixed $\gamma$. If it fails, decrease $\gamma$; otherwise increase it, and repeat until you converge.
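Putting everything together, a minimal sketch of the bisection wrapped around the cone complementarity iterations might look as follows. The data $N$, the bisection bracket, iteration counts, and tolerances are all my own assumptions, and I take $N$ real for simplicity (a complex $N$ works the same way with conjugate transposes and Hermitian blocks):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 4
N = 0.3 * rng.normal(size=(n, n))   # hypothetical real data matrix
I = np.eye(n)

def ccl_feasible(gamma, iters=30, tol=1e-6):
    """Cone complementarity iterations for a fixed gamma.
    Returns True if F = D^{-1} is (numerically) recovered, i.e. the
    nonconvex problem appears feasible at this gamma."""
    f, d, w = cp.Variable(n), cp.Variable(n), cp.Variable(n)
    F, D, W = cp.diag(f), cp.diag(d), cp.diag(w)
    cons = [
        gamma <= cp.sum(w),                        # target on the objective
        cp.bmat([[F, W @ N], [N.T @ W, D]]) >> 0,  # main constraint, F in place of D^{-1}
        cp.bmat([[D, I], [I, F]]) >> 0,            # linking LMI: F >= D^{-1}
        # d_j, f_j > 0 is implied by the linking LMI (it forces f_j d_j >= 1).
    ]
    f_i, d_i = np.ones(n), np.ones(n)
    for _ in range(iters):
        prob = cp.Problem(cp.Minimize(d_i @ f + f_i @ d), cons)
        prob.solve(solver=cp.SCS)
        if prob.status not in ("optimal", "optimal_inaccurate"):
            return False   # relaxation infeasible (or solver failure)
        f_i, d_i = f.value, d.value
        if abs(float(f_i @ d_i) - n) < tol:   # trace(DF) = n  <=>  FD = I
            return True
    return False

# Bisection on gamma; the bracket [0, 10] is an assumed upper bound.
lo, hi = 0.0, 10.0
for _ in range(20):
    mid = 0.5 * (lo + hi)
    if ccl_feasible(mid):
        lo = mid   # achievable: push gamma up
    else:
        hi = mid   # failed: back off
print("best gamma found:", lo)
```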
