Box-constrained QCQP

Tags: non-convex-optimization, nonlinear-optimization, optimization, qcqp

Let ${\bf A} \in \mathbb{R}^{n \times n}$ be a symmetric positive semidefinite (PSD) matrix, let ${\bf a} \in \mathbb{R}^n$, and let ${\bf B} \in \mathbb{R}^{n \times n}$ be a symmetric indefinite matrix whose eigenvalues come in pairs $\pm \lambda_i$ with $\lambda_i \neq 0$. Let $c > 0$. Consider the optimization problem

$$ \begin{array}{ll} \underset {{\bf x} \in \mathbb{R}^n} {\text{minimize}} & {\bf x}^\top {\bf A} \, {\bf x} + {\bf a}^\top {\bf x} \\ \text{subject to} & {\bf x}^\top {\bf B} \, {\bf x} = 0\\ & 0 \le x_i \le c \end{array} $$

I am aware that this is not a convex problem because of the quadratic equality constraint. I did find a paper that proposes methods for quadratic optimization under a single quadratic equality constraint, but that setting does not quite fit my problem because of the box constraints.

How would one tackle it, and ideally, are there good free solvers available?

Best Answer

This is really more of a comment than a complete answer, but in case you are not aware, a slightly simpler version of your quadratic program (dropping the linear term and the box constraints) can be reformulated as

\begin{align} \text{minimize}\;\;&\langle A,X\rangle\\[4pt] \text{subject to}\;\;&\langle B,X\rangle=0\\[1pt] &\text{rank}(X)=1\\[1pt] &X\succeq 0 \end{align}

where the final two constraints ensure that $X$ is the outer product of some vector with itself, i.e. $X = xx^\top$, so that $x$ can be recovered from $X$.
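For completeness, the equivalence rests on nothing more than the standard trace identity used in this lifting step:

$$ x^\top A\,x \;=\; \operatorname{tr}\!\left(A\,xx^\top\right) \;=\; \langle A, X\rangle, \qquad x^\top B\,x \;=\; \operatorname{tr}\!\left(B\,xx^\top\right) \;=\; \langle B, X\rangle, \qquad \text{when } X = xx^\top. $$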

Your actual problem, which also has the linear term ${\bf a}^\top {\bf x}$ and the box constraints, can be handled in a similar fashion after a process called homogenization: append an extra coordinate $t$ with $t^2 = 1$ and work with the lifted vector $(t, {\bf x})$, so that linear terms become quadratic and every constraint becomes a homogeneous quadratic.
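As an illustration (one common way to set this up, not necessarily the only one), the homogenized matrix variable and data could look like

$$ Y = \begin{pmatrix} t^2 & t\,{\bf x}^\top \\ t\,{\bf x} & {\bf x}{\bf x}^\top \end{pmatrix}, \qquad \tilde{A} = \begin{pmatrix} 0 & \tfrac{1}{2}{\bf a}^\top \\ \tfrac{1}{2}{\bf a} & {\bf A} \end{pmatrix}, \qquad \tilde{B} = \begin{pmatrix} 0 & {\bf 0}^\top \\ {\bf 0} & {\bf B} \end{pmatrix}, $$

so that the objective becomes $\langle \tilde{A}, Y\rangle$, the quadratic constraint becomes $\langle \tilde{B}, Y\rangle = 0$, and one adds $Y_{11} = 1$ together with $0 \le Y_{1,i+1} \le c$ for the box (and, as a standard strengthening, $Y_{i+1,i+1} \le c\, Y_{1,i+1}$, which follows from $0 \le x_i \le c$).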

The non-convex aspect has now been isolated in the rank constraint; if you remove it, you obtain a semidefinite relaxation of your quadratic program. This relaxation is canonical, and people have studied it in detail (I have not, however).
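On the solver question: since the relaxation is a standard SDP, it can be handed to any free SDP solver. Below is a minimal sketch using CVXPY with the bundled SCS solver. The data is a small hypothetical instance (not from your problem), the lifted matrix follows the homogenization above, and the inequality $\operatorname{diag}(X) \le c\,x$ is the valid cut mentioned there rather than part of your original formulation.

```python
import numpy as np
import cvxpy as cp

# --- hypothetical small instance (placeholder data, not from the question) ---
n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T                                     # symmetric PSD
a = rng.standard_normal(n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = Q @ np.diag([1.0, -1.0, 1.0, -1.0]) @ Q.T   # indefinite, eigenvalues in +-1 pairs (n = 4)
c = 1.0

# --- homogenized data, as in the block matrices above ---
A_tilde = np.block([[np.zeros((1, 1)), a[None, :] / 2],
                    [a[:, None] / 2,   A]])
B_tilde = np.block([[np.zeros((1, 1)), np.zeros((1, n))],
                    [np.zeros((n, 1)), B]])

# Y plays the role of (1, x)(1, x)^T; dropping rank(Y) = 1 gives the relaxation.
Y = cp.Variable((n + 1, n + 1), symmetric=True)
x_lift = Y[0, 1:]        # stands in for x
X_lift = Y[1:, 1:]       # stands in for x x^T

constraints = [
    Y >> 0,                         # PSD lifting
    Y[0, 0] == 1,                   # homogenization / normalization
    cp.trace(B_tilde @ Y) == 0,     # <B, X> = 0
    x_lift >= 0,
    x_lift <= c,                    # box constraint on the linear part
    cp.diag(X_lift) <= c * x_lift,  # valid cut implied by 0 <= x_i <= c
]

prob = cp.Problem(cp.Minimize(cp.trace(A_tilde @ Y)), constraints)
prob.solve(solver=cp.SCS)
print("SDP relaxation lower bound:", prob.value)
print("candidate x:", x_lift.value)
```

If the optimal $Y$ happens to be rank one, the relaxation is tight and $x$ can be read off directly from the first row; otherwise the optimal value is only a lower bound and some rounding or local refinement step is needed to recover a feasible $x$.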