SDP reformulation with range space constraint

convex-optimization, optimization, semidefinite-programming

On page 6 of the paper "A semidefinite programming method for integer convex quadratic minimization", it is stated that the following maximization problem

\begin{align}
\text{maximize} & \quad -\left(q + \frac{1}{2}\lambda \right)^\top \left(P - \text{diag}(\lambda)\right)^\dagger \left(q + \frac{1}{2}\lambda \right)\\
\text{subject to} & \quad P - \text{diag}(\lambda) \succeq 0, \\
& \quad q + \frac{1}{2}\lambda \in \mathcal{R}\left(P - \text{diag}(\lambda)\right), \\
& \quad \lambda \geq 0
\end{align}

is equivalent to
\begin{align}
\text{maximize} & \quad -\gamma \\
\text{subject to} & \quad
\begin{bmatrix}
P - \text{diag}(\lambda) & q + \frac{1}{2}\lambda \\
\left(q + \frac{1}{2}\lambda \right)^\top & \gamma
\end{bmatrix} \succeq 0 \\
& \quad \lambda \geq 0,
\end{align}

where $P \in \mathbb{R}^{n\times n}$ is symmetric, $q \in \mathbb{R}^n$, $\lambda \in \mathbb{R}^n$, $\gamma \in \mathbb{R}$, $A^\dagger$ is the Moore-Penrose pseudoinverse of the matrix $A$, $\text{diag}(\lambda)$ is the diagonal matrix with $\lambda$ on the main diagonal, $A \succeq 0$ means that $A$ is positive semidefinite, $\lambda \geq 0$ means that all components of $\lambda$ are non-negative, and $b \in \mathcal{R}(A)$ means that $b$ is in the range space of $A$.
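
For concreteness, the second formulation can be handed to a solver as a single LMI. Below is a minimal CVXPY sketch with made-up data for $P$ and $q$ (purely illustrative, not from the paper):

```python
import cvxpy as cp
import numpy as np

n = 3
# Made-up toy data, only to illustrate the LMI formulation.
rng = np.random.default_rng(0)
G = rng.standard_normal((n, n))
P = G @ G.T + np.eye(n)              # symmetric positive definite
q = rng.standard_normal(n)

lam = cp.Variable(n, nonneg=True)    # lambda >= 0
gamma = cp.Variable()

b = q + lam / 2
# Block matrix [[P - diag(lam), b], [b^T, gamma]]
M = cp.bmat([[P - cp.diag(lam), cp.reshape(b, (n, 1))],
             [cp.reshape(b, (1, n)), cp.reshape(gamma, (1, 1))]])

prob = cp.Problem(cp.Maximize(-gamma), [M >> 0])
prob.solve()
print(prob.value, lam.value)
```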

I understand how we can use Schur complements to reformulate the SDP. However, what I do not understand is why we can drop the range space constraint $q + \frac{1}{2}\lambda \in \mathcal{R}(P - \text{diag}(\lambda))$; I do not see how this constraint is enforced in the reformulation.
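
For reference, in the strict case $P - \text{diag}(\lambda) \succ 0$ the Schur complement argument is clear to me: the range condition holds automatically, and
\begin{align}
\begin{bmatrix}
P - \text{diag}(\lambda) & q + \frac{1}{2}\lambda \\
\left(q + \frac{1}{2}\lambda \right)^\top & \gamma
\end{bmatrix} \succeq 0
\quad\Longleftrightarrow\quad
\gamma \geq \left(q + \frac{1}{2}\lambda \right)^\top \left(P - \text{diag}(\lambda)\right)^{-1} \left(q + \frac{1}{2}\lambda \right),
\end{align}
so maximizing $-\gamma$ recovers the original objective. My question concerns the singular case, where the pseudoinverse and the range constraint come into play.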

Best Answer

It has not been dropped; it is implicit in the non-strict (generalized) Schur complement condition. If the range condition fails, the block LMI in the second formulation cannot hold: since $P - \text{diag}(\lambda)$ is symmetric, $q + \frac{1}{2}\lambda \notin \mathcal{R}(P - \text{diag}(\lambda))$ means there is an $x$ with $(P - \text{diag}(\lambda))x = 0$ but $x^\top(q + \frac{1}{2}\lambda) \neq 0$, and the quadratic form of the block matrix evaluated at $(tx, 1)$ equals $2t\,x^\top(q + \frac{1}{2}\lambda) + \gamma$, which is negative for a suitable choice of $t$.
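
Concretely, the non-strict (generalized) Schur complement fact being used is: for a symmetric matrix $A$, a vector $b$, and a scalar $c$,
\begin{align}
\begin{bmatrix} A & b \\ b^\top & c \end{bmatrix} \succeq 0
\quad\Longleftrightarrow\quad
A \succeq 0, \quad b \in \mathcal{R}(A), \quad c - b^\top A^\dagger b \geq 0.
\end{align}
Applying this with $A = P - \text{diag}(\lambda)$, $b = q + \frac{1}{2}\lambda$ and $c = \gamma$, the single LMI encodes the semidefiniteness constraint, the range constraint and $\gamma \geq b^\top A^\dagger b$ simultaneously; maximizing $-\gamma$ then pushes $\gamma$ down to $b^\top A^\dagger b$, recovering the objective of the first formulation.

A minimal numerical sketch of the failure mode (the data $P$, $q$, $\lambda$, $\gamma$ below are made up, purely for illustration): when $q + \frac{1}{2}\lambda$ has a component outside $\mathcal{R}(P - \text{diag}(\lambda))$, the block matrix has a direction along which the quadratic form is negative, no matter how large $\gamma$ is.

```python
import numpy as np

# Toy data: P - diag(lam) is singular and q + lam/2 leaves its range.
P = np.diag([1.0, 0.0])
lam = np.zeros(2)
q = np.array([0.0, 1.0])
gamma = 10.0                     # even a large gamma does not help

A = P - np.diag(lam)
b = q + 0.5 * lam
M = np.block([[A, b[:, None]],
              [b[None, :], np.array([[gamma]])]])

# x is in the null space of A but x^T b != 0, so for v = (t*x, 1)
# the quadratic form is v^T M v = 2*t*(x^T b) + gamma, which is
# unbounded below in t; hence M cannot be positive semidefinite.
x = np.array([0.0, 1.0])
for t in [0.0, -10.0, -100.0]:
    v = np.append(t * x, 1.0)
    print(t, v @ M @ v)          # 10.0, -10.0, -190.0
```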
