[Math] A non-convex quadratically constrained quadratic program (QCQP)

non-convex-optimization, optimization, qcqp

How can I solve the following quadratically constrained quadratic optimization problem?

$$\max_{x} \qquad x'Ax$$
such that
$$x'P_ix - k \leq 0 \qquad i=1,2$$

$A$ and $P_i$ are positive semidefinite matrices, and $x \in \mathbb{R}^2$. I know we can minimize $x' \hat{A} x$ where $\hat{A} = -A$, but this turns the problem into minimizing a concave function subject to convex constraints, which is still non-convex. If it helps, minimizing the reciprocal of $x' A x$ is also an option.

The objective function in that case would be $$\min_x \qquad \frac{1}{x'Ax}$$

Best Answer

Since your matrices are positive semidefinite, there are a number of algorithms you can use (see citation 6 at https://en.wikipedia.org/wiki/Quadratic_programming#cite_note-6). But this problem is simple enough that we don't need such techniques:

Given $x \in \mathbb{R}^2$ we wish to solve

$$ \max x^T A x \\ x^T P_1 x \le k, x^T P_2 x \le k$$
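Before going through the algebra below, it can be useful to get a numerical answer to compare against. A minimal sketch using SciPy's SLSQP solver, with made-up example matrices (not from the question); since maximizing a convex function is non-convex, the local solver is restarted from several random points:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up example data, not from the question.
A  = np.array([[2.0, 0.5], [0.5, 1.0]])   # PSD objective matrix
P1 = np.array([[1.0, 0.0], [0.0, 3.0]])   # PSD constraint matrices
P2 = np.array([[2.0, 0.0], [0.0, 1.0]])
k  = 1.0

obj = lambda x: -(x @ A @ x)              # negate to maximize via minimize
cons = [{"type": "ineq", "fun": lambda x, P=P: k - x @ P @ x} for P in (P1, P2)]

# Maximizing a convex function is non-convex, so a local solver can get
# stuck; restart from several random points and keep the best result.
best = None
rng = np.random.default_rng(0)
for _ in range(20):
    res = minimize(obj, rng.normal(size=2), method="SLSQP", constraints=cons)
    if res.success and (best is None or res.fun < best.fun):
        best = res

x_star = best.x
print("x* =", x_star, " objective =", x_star @ A @ x_star)
```

The best point found this way should satisfy the KKT conditions derived below.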

This can be written out completely concretely. Let $A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}$, and write $P_{ijm}$ for the $(i,j)$ entry of $P_m$, so that

$$P_1 = \begin{pmatrix} P_{001} & P_{011} \\ P_{101} & P_{111} \end{pmatrix}$$

$$P_2 = \begin{pmatrix} P_{002} & P_{012} \\ P_{102} & P_{112} \end{pmatrix}$$

Then it follows that we wish to solve

$$ \max x_0 (A_{00} x_0 + A_{01} x_1) + x_1(A_{10} x_0 + A_{11} x_1) \\ x_0 (P_{001} x_0 + P_{011} x_1) + x_1(P_{101} x_0 + P_{111} x_1) -k \le 0 \\ x_0 (P_{002} x_0 + P_{012} x_1) + x_1(P_{102} x_0 + P_{112} x_1) -k \le 0 $$

We rearrange terms here to yield:

$$ \max A_{00} x_0^2 + (A_{01}+ A_{10} )x_1x_0 + A_{11} x_1^2 \\ P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k \le 0 \\ P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k \le 0 $$
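As a sanity check on the rearrangement, symbolic algebra confirms that the scalar form above agrees with $x^TAx$; a quick sketch using sympy (the symbol names are arbitrary):

```python
import sympy as sp

x0, x1 = sp.symbols("x0 x1")
A00, A01, A10, A11 = sp.symbols("A00 A01 A10 A11")

x = sp.Matrix([x0, x1])
A = sp.Matrix([[A00, A01], [A10, A11]])

matrix_form   = (x.T * A * x)[0]                       # x'Ax as a scalar
expanded_form = A00*x0**2 + (A01 + A10)*x0*x1 + A11*x1**2

# The difference simplifies to zero, so the two forms are identical.
print(sp.simplify(matrix_form - expanded_form))
```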

We can now write down the KKT conditions (a generalization of Lagrange multipliers): https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions

These tell us that an optimal solution $x^*$ to this problem will satisfy the conditions below, where we abbreviate $f = A_{00} x_0^2 + (A_{01}+ A_{10} )x_1x_0 + A_{11} x_1^2$, $p_1 = P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k$, and $p_2 = P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k$.

Conditions:

$$\nabla f(x^*) = \mu_1 \nabla p_1(x^*) + \mu_2 \nabla p_2 (x^*)$$ $$ p_1(x^*) \le 0 $$ $$ p_2(x^*) \le 0 $$ $$ \mu_1 \ge 0, \mu_2 \ge 0$$ $$ \mu_1 p_1(x^*) = 0, \mu_2 p_2(x^*) = 0$$

Let's tackle the first condition, the one with the $\nabla$'s, by unpacking it for our case:

$$ \begin{bmatrix} 2A_{00}x_0 + (A_{01} + A_{10})x_1 = \mu_1 (2P_{001}x_0 + (P_{011} + P_{101})x_1) + \mu_2 (2P_{002}x_0 + (P_{012} + P_{102})x_1) \\ 2A_{11}x_1 + (A_{01} + A_{10})x_0 = \mu_1 (2P_{111}x_1 + (P_{011} + P_{101})x_0) + \mu_2 (2P_{112}x_1 + (P_{012} + P_{102})x_0)\end{bmatrix}$$
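These two equations are just the coordinate form of the gradient identity $\nabla(x^TAx) = (A + A^T)x$ (which is $2Ax$ when $A$ is symmetric). A quick symbolic check of that identity, assuming sympy is available:

```python
import sympy as sp

x0, x1 = sp.symbols("x0 x1")
A00, A01, A10, A11 = sp.symbols("A00 A01 A10 A11")
A = sp.Matrix([[A00, A01], [A10, A11]])
x = sp.Matrix([x0, x1])

q = (x.T * A * x)[0]
gradient = sp.Matrix([q.diff(x0), q.diff(x1)])

# The gradient of x'Ax is (A + A')x; the difference expands to the zero vector.
print(sp.expand(gradient - (A + A.T) * x))
```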

Now look at the last two conditions, the complementary slackness equations $\mu_1 p_1(x^*) = 0$ and $\mu_2 p_2(x^*) = 0$.

We can unpack these as well to yield

$$ \mu_1 (P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k) = 0 \\ \mu_2 (P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k)= 0 $$

combining these four, together:

$$ \begin{bmatrix} 2A_{00}x_0 + (A_{01} + A_{10})x_1 = \mu_1 (2P_{001}x_0 + (P_{011} + P_{101})x_1) + \mu_2 (2P_{002}x_0 + (P_{012} + P_{102})x_1) \\ 2A_{11}x_1 + (A_{01} + A_{10})x_0 = \mu_1 (2P_{111}x_1 + (P_{011} + P_{101})x_0) + \mu_2 (2P_{112}x_1 + (P_{012} + P_{102})x_0)\\ \mu_1 (P_{001} x_0^2 + (P_{011}+P_{101}) x_0x_1 + P_{111} x_1^2 -k) = 0 \\ \mu_2 (P_{002} x_0^2 + (P_{012} +P_{102}) x_0x_1 + P_{112} x_1^2 -k)= 0 \end{bmatrix} $$

We now have 4 equations in the 4 unknowns $\mu_1, \mu_2, x_0, x_1$. The complementarity equations split the system into finitely many cases (each $\mu_i$ is either zero or its constraint holds with equality), and each case can be solved algebraically. Discard the candidates that violate a constraint or have a negative multiplier, then select the one that maximizes the objective. Note that since the objective is convex, the maximum lies on the boundary of the feasible region, so at least one constraint is active at the optimum.
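The case-by-case solve can be automated symbolically. A sketch with made-up example matrices (not from the question) that enumerates the four complementarity cases, solves each polynomial system with sympy, and keeps the best feasible candidate with nonnegative multipliers:

```python
import itertools
import sympy as sp

# Made-up example data, not from the question.
A  = sp.Matrix([[2, sp.Rational(1, 2)], [sp.Rational(1, 2), 1]])
P1 = sp.Matrix([[1, 0], [0, 3]])
P2 = sp.Matrix([[2, 0], [0, 1]])
k  = 1

x0, x1, m1, m2 = sp.symbols("x0 x1 mu1 mu2")
xv = sp.Matrix([x0, x1])

f  = (xv.T * A  * xv)[0]
p1 = (xv.T * P1 * xv)[0] - k
p2 = (xv.T * P2 * xv)[0] - k

grad = lambda g: [g.diff(x0), g.diff(x1)]
stationarity = [df - m1 * d1 - m2 * d2
                for df, d1, d2 in zip(grad(f), grad(p1), grad(p2))]

best_val, best_pt = None, None
# Complementary slackness: for each i, either mu_i = 0 or p_i(x) = 0.
for case in itertools.product([m1, p1], [m2, p2]):
    for s in sp.solve(stationarity + list(case), [x0, x1, m1, m2], dict=True):
        v = [complex(s.get(sym, 0)) for sym in (x0, x1, m1, m2)]
        if any(abs(z.imag) > 1e-9 for z in v):          # drop complex roots
            continue
        (a, b), (u1, u2) = [z.real for z in v[:2]], [z.real for z in v[2:]]
        if u1 < -1e-9 or u2 < -1e-9:                    # need mu_i >= 0
            continue
        subs = {x0: a, x1: b}
        if float(p1.subs(subs)) > 1e-9 or float(p2.subs(subs)) > 1e-9:
            continue                                    # infeasible candidate
        val = float(f.subs(subs))
        if best_val is None or val > best_val:
            best_val, best_pt = val, (a, b)

print("max value:", best_val, "at", best_pt)
```

For these example matrices both constraints are active at the optimum, which lands at a corner of the feasible region.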
