QCQP with bilinear constraints

convex optimization, nonlinear optimization, optimization, qcqp

I have the following optimization problem in the variable $u := \left(u_{1},u_{2}\right)^{\top}$:

$$\begin{array}{ll} \text{minimize} & \frac{1}{2}u^{\top}\Sigma u-au^{\top}\beta\\ \text{subject to} & u_{1}u_{2}\le0\end{array}$$

where the matrix $\Sigma$ is positive definite and $a>0$.
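For concreteness, here is a minimal numerical sketch of this QCQP using `scipy.optimize.minimize` (the particular $\Sigma$, $\beta$, $a$ below are made-up toy data, not part of the question). Because the bilinear constraint makes the feasible set non-convex, SLSQP only returns a local solution, so a few starting points are tried.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data chosen for illustration: any positive definite Sigma and a > 0 would do.
Sigma = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
beta = np.array([1.0, 1.0])
a = 1.0

def f(u):
    # Objective: (1/2) u' Sigma u - a u' beta
    return 0.5 * u @ Sigma @ u - a * u @ beta

# SLSQP expects inequality constraints in the form g(u) >= 0,
# so u1*u2 <= 0 is passed as -u1*u2 >= 0.
cons = {"type": "ineq", "fun": lambda u: -u[0] * u[1]}

# The constraint is non-convex, so try several starts and keep the best local solution.
best = None
for u0 in [np.array([1.0, -1.0]), np.array([-1.0, 1.0]), np.zeros(2)]:
    res = minimize(f, u0, method="SLSQP", constraints=[cons])
    if res.success and (best is None or res.fun < best.fun):
        best = res

print("constrained minimizer:", best.x)                  # with these data, roughly (0.5, 0) or (0, 0.5)
print("u1*u2 at the solution:", best.x[0] * best.x[1])   # approximately 0: the constraint is active
```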

The first-order condition for the unconstrained problem is

\begin{equation}
\nabla f=\Sigma u-a\beta=0\ \Rightarrow\ u^{*}=a\Sigma^{-1}\beta=\left(\begin{array}{c}
u_{1}^{*}\\
u_{2}^{*}
\end{array}\right).
\end{equation}
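As a concrete instance (same toy data as in the sketch above, chosen for illustration rather than taken from the question): with $\Sigma=\left(\begin{array}{cc}2 & 1\\1 & 2\end{array}\right)$, $\beta=\left(\begin{array}{c}1\\1\end{array}\right)$ and $a=1$,

$$u^{*}=a\Sigma^{-1}\beta=\frac{1}{3}\left(\begin{array}{cc}2 & -1\\-1 & 2\end{array}\right)\left(\begin{array}{c}1\\1\end{array}\right)=\left(\begin{array}{c}1/3\\1/3\end{array}\right),$$

so $u_{1}^{*}u_{2}^{*}=1/9>0$ and the stationary point is infeasible, which is exactly the situation the question asks about.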

If $u_{1}^{*}u_{2}^{*}>0$ and we assume the constrained problem attains its minimum at $u^{\#}=\left(\begin{array}{c}
u_{1}^{\#}\\
u_{2}^{\#}
\end{array}\right)$, do we have $u_{1}^{\#}u_{2}^{\#}=0$?

Does anyone know how to show this rigorously?

Best Answer

Sure, in any problem with a single inequality constraint $$\min_x\ f(x) \quad \mathrm{s.t.} \quad g(x) \leq 0,$$ whether convex or not, the solutions will be of two types:

  1. unconstrained minimizers of $f(x)$ with the inequality constraint inactive, $g(x)<0$;

  2. constrained minimizers with the inequality constraint active, $g(x)=0$.

The geometry here is fairly intuitive: if $g(x)<0$ and yet you're not at an unconstrained minimum of $f(x)$, you can descend along $-\nabla f$ until you either hit the boundary $g=0$ or reach a minimizer of $f(x)$. This argument can be made rigorous with enough regularity assumptions on $f$ and $g$, etc.

If you can prove there are no solutions of type (1), then all solutions must be of type (2). In your problem $g(u)=u_{1}u_{2}$, and since $\Sigma$ is positive definite the only unconstrained minimizer is $u^{*}=a\Sigma^{-1}\beta$; if $u_{1}^{*}u_{2}^{*}>0$ it is infeasible, so there is no type (1) solution and any minimizer $u^{\#}$ must satisfy $u_{1}^{\#}u_{2}^{\#}=0$.
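Applied to the question's problem, the case split is easy to carry out explicitly: the boundary $u_{1}u_{2}=0$ is the union of the two coordinate axes, and restricting $f$ to either axis gives a one-dimensional quadratic with a closed-form minimizer. A minimal sketch (reusing the made-up toy data from the sketches above):

```python
import numpy as np

# Same toy data as above, chosen only for illustration.
Sigma = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
beta = np.array([1.0, 1.0])
a = 1.0

def f(u):
    return 0.5 * u @ Sigma @ u - a * u @ beta

# Type (1): unconstrained minimizer u* = a * Sigma^{-1} beta.
u_star = a * np.linalg.solve(Sigma, beta)

if u_star[0] * u_star[1] <= 0:
    u_hash = u_star  # u* is feasible: the constraint is inactive
else:
    # Type (2): the constraint is active, u1*u2 = 0, i.e. u lies on one of the axes.
    # Restricting f to u1 = 0 (resp. u2 = 0) gives a scalar quadratic with a closed-form minimizer.
    cand1 = np.array([0.0, a * beta[1] / Sigma[1, 1]])   # best point on the axis u1 = 0
    cand2 = np.array([a * beta[0] / Sigma[0, 0], 0.0])   # best point on the axis u2 = 0
    u_hash = min([cand1, cand2], key=f)

print("u# =", u_hash, " with u1#*u2# =", u_hash[0] * u_hash[1])
```

With these numbers both axis candidates attain the value $-1/4$, while the infeasible unconstrained minimum would be $-1/3$, so the constrained minimum is attained on $u_{1}u_{2}=0$, consistent with the claim above.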
