[Math] Analog of Simplex Method for Quadratic Programming

convex optimization, numerical optimization, optimization, quadratic programming

It so happens that I need to work with an algorithm for solving QP problems. The main issue is that I can't find any references to this algorithm (at least none in English-language sources). There is something on Wikipedia, but it is rather obscure.
I was able to find a couple of books where this algorithm is referred to as the 'Singular Points Method' (an exact translation from Russian to English), but I want to know whether there is a globally accepted name for this algorithm, and perhaps existing implementations in Matlab, Maple, or SciPy.

The algorithm looks like some kind of 'Simplex Method', but for QP problems, and it always converges in a finite number of steps.

Brief algorithm description:

We have the following QP problem:

\begin{equation}
f(x) \to \min, \quad x \in D,
\tag{1}
\end{equation}

\begin{equation}
D = \{ x \in R^n \, | \, Ax \leq b \},
\tag{2}
\end{equation}

where
$$
f: R^n \to R, \quad f(x) = \frac{1}{2} \langle Cx, x\rangle + \langle c, x\rangle
$$

$C \in R(n, n)$ is a symmetric positive semidefinite matrix,
$c \in R^n$, $A \in R(m, n)$, $b \in R^m$.

The notation $\langle a, b\rangle$ stands for the dot product, $R(m, n)$ is the set of all $(m \times n)$-dimensional matrices, and $a_1, \ldots, a_m$ denote the rows of $A$.
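To make the setup concrete, here is a toy instance of (1), (2) in Python/NumPy. The numbers are made up purely for illustration, and the sketches further down reuse these names:

```python
import numpy as np

# A made-up toy instance of problem (1), (2).
C = np.array([[2.0, 0.0],
              [0.0, 2.0]])     # symmetric positive semidefinite
c = np.array([-2.0, -5.0])
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
b = np.array([2.0, 2.0, -1.0])

def f(x):
    """Objective f(x) = 1/2 <Cx, x> + <c, x>."""
    return 0.5 * x @ C @ x + c @ x

def feasible(x, tol=1e-9):
    """Check membership in D = {x : Ax <= b}."""
    return np.all(A @ x <= b + tol)
```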

Next, let's define an analog of the simplex vertices that we iterate over in the 'Simplex Method'.

Definition 1. A point $\hat x$ is called singular for problem (1), (2) if there exists a set $I \subset \{ 1, \ldots, m \} $ such that $\hat x$ is a solution of the following problem:

\begin{equation}
f(x) \to \min, \quad x \in D_I,
\tag{3}
\end{equation}

\begin{equation}
D_I = \{ x \in R^n \, | \, \langle a_i, x\rangle = b_i, \, i \in I \}
\tag{4}
\end{equation}

(cases when $I = \emptyset$ and $I = \{ 1, \ldots, m \}$ are both included).
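For reference, subproblem (3), (4) is just an equality-constrained QP, so (when the active rows $a_i$, $i \in I$, are linearly independent and $C$ is positive definite on their null space) it can be solved in one shot from its KKT system. A minimal NumPy sketch, with the degenerate cases deliberately left unhandled:

```python
import numpy as np

def solve_subproblem(C, c, A, b, I):
    """Minimize f over D_I = {x : <a_i, x> = b_i, i in I} by solving the
    KKT system  [C  A_I^T; A_I  0] [x; lam] = [-c; b_I].
    Assumes the KKT matrix is nonsingular; a robust implementation
    would also have to handle the degenerate cases."""
    A_I = A[list(I), :]
    k, n = A_I.shape[0], C.shape[0]
    K = np.block([[C, A_I.T],
                  [A_I, np.zeros((k, k))]])
    rhs = np.concatenate([-c, b[list(I)]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]   # minimizer and Lagrange multipliers
```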

Ok, now let's formulate a theorem that will shed some light on the actual algorithm.

Theorem 1. For every symmetric positive semidefinite matrix $ C \in R(n, n) $ and every $c \in R^n$, $A \in R(m, n)$, $b \in R^m$, problem (1), (2) has only a finite set of singular points, and a solution of this problem is one of its singular points.

At this point it is clear that we could iterate over the singular points and pick the one where the value of the objective function is minimal; that singular point will be the solution.
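Theorem 1 already yields a (hopelessly exponential, purely illustrative) brute-force procedure: enumerate every index set $I$, solve (3), (4), and keep the best feasible candidate. A sketch reusing `f`, `feasible`, and `solve_subproblem` from above:

```python
import numpy as np
from itertools import combinations

def brute_force_qp(C, c, A, b):
    """Illustrates Theorem 1 only: enumerate all 2^m index sets I,
    solve subproblem (3), (4) for each, and return the best feasible
    candidate.  Exponential in m -- not the actual algorithm."""
    m = A.shape[0]
    best_x, best_f = None, np.inf
    for size in range(m + 1):
        for I in combinations(range(m), size):
            try:
                x, _ = solve_subproblem(C, c, A, b, I)
            except np.linalg.LinAlgError:
                continue          # singular KKT system for this I
            if feasible(x) and f(x) < best_f:
                best_x, best_f = x, f(x)
    return best_x, best_f
```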

Of course, the actual iteration process is more involved. It builds a sequence of singular points $x^1, x^2, \ldots, x^k$ such that $ f( x^{i+1} ) < f(x^i) $. The conjugate gradient method, with some modifications for semidefinite matrices, can be used to solve problem (3), (4) (for quadratic functions it converges in a finite number of steps).
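The connection to (3), (4): on the affine set $D_I$ one can write $x = x_p + Z y$, where $Z$ spans the null space of the active rows, which turns (3), (4) into an unconstrained quadratic in $y$, and CG applies to that. Here is a minimal sketch of plain CG for the unconstrained quadratic; the semidefinite-case modifications the text alludes to (e.g. stopping along zero-curvature directions) are omitted:

```python
import numpy as np

def cg_quadratic(C, c, x0, tol=1e-10):
    """Minimize 1/2 <Cx, x> + <c, x> by conjugate gradients, i.e. solve
    Cx = -c.  For positive definite C this terminates in at most n steps
    in exact arithmetic; the semidefinite case needs the modifications
    mentioned in the text (omitted here)."""
    x = x0.astype(float).copy()
    r = -(C @ x + c)              # residual = negative gradient of f
    p = r.copy()
    for _ in range(len(x)):       # at most n iterations
        if np.linalg.norm(r) < tol:
            break
        Cp = C @ p
        alpha = (r @ r) / (p @ Cp)
        x += alpha * p
        r_new = r - alpha * Cp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x
```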

I could provide even more details, but I'm pretty sure that at this point the algorithm can be identified.

Edit:

The main idea of algorithm:

Suppose, we have some point $x_0 \in D$.

  1. Select the constraints from (2) that are satisfied as equalities at $x_0$. These constraints define some face $D_0$ of the polyhedron described by (2).

  2. Find $\hat x_0 = \arg\min_{x \in D_0} f(x)$.

$\hat x_0$ is either the solution of problem (1), (2), or it is possible to switch to another face and return to step 1.

By the way, there are no non-negativity constraints on the problem variables.
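Putting the two steps together, here is how I would sketch the outer iteration in NumPy — as a generic primal active-set loop rather than the exact method from the Russian books. The face-switching rule used here is the textbook multiplier test; degeneracy and cycling are not handled, and `solve_subproblem` from above is reused:

```python
import numpy as np

def active_set_qp(C, c, A, b, x0, tol=1e-9, max_iter=100):
    """A sketch of a primal active-set iteration in the spirit of the
    description above.  Reuses solve_subproblem from the earlier sketch;
    x0 must satisfy A @ x0 <= b."""
    x = x0.astype(float).copy()
    # Step 1: the constraints active at x0 define the current face.
    I = set(np.where(np.abs(A @ x - b) < tol)[0])
    for _ in range(max_iter):
        # Step 2: minimize f over the current face D_I.
        x_hat, lam = solve_subproblem(C, c, A, b, sorted(I))
        step = x_hat - x
        if np.linalg.norm(step) < tol:
            if np.all(lam >= -tol):
                return x                  # KKT holds: x solves (1), (2)
            # Negative multiplier: drop that constraint, switch faces.
            I.remove(sorted(I)[int(np.argmin(lam))])
            continue
        # Move toward x_hat, stopping at the first blocking constraint.
        alpha, blocking = 1.0, None
        for j in set(range(A.shape[0])) - I:
            d = A[j] @ step
            if d > tol:
                ratio = (b[j] - A[j] @ x) / d
                if ratio < alpha:
                    alpha, blocking = ratio, j
        x = x + alpha * step
        if blocking is not None:
            I.add(blocking)               # new active constraint
    return x
```

On the toy instance above, `active_set_qp(C, c, A, b, np.array([1.0, 1.0]))` returns the constrained minimizer $(1, 2)$ after one face switch.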

Best Answer

I'm not sure if this is what you are thinking of, but there is Wolfe's method, which is an extended simplex method. A description can be found here and in Winston, Operations Research: Applications and Algorithms.
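As for ready-made implementations: as far as I know SciPy ships no dedicated QP routine, but a small instance can be handled with its general-purpose `minimize`. A sketch on the toy data defined above (SLSQP is just one choice; dedicated packages such as cvxopt also exist):

```python
import numpy as np
from scipy.optimize import minimize

# Reusing the toy C, c, A, b from the first sketch.  The inequality
# A @ x <= b is passed to SLSQP in the form b - A @ x >= 0.
res = minimize(
    fun=lambda x: 0.5 * x @ C @ x + c @ x,
    x0=np.array([1.0, 1.0]),
    jac=lambda x: C @ x + c,
    constraints={"type": "ineq", "fun": lambda x: b - A @ x},
    method="SLSQP",
)
print(res.x, res.fun)   # should be approximately [1. 2.] and -7.0
```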
