Solving Quadratic Programming Problem using Linear Programming Solver

linear programming, numerical optimization, optimization, quadratic programming, simplex method

I have a quadratic programming problem:

$$J_{\min}: \quad \frac{1}{2}x^TQx + c^Tx \\
\text{s.t.} \quad Ax \leq b, \quad x \geq 0$$

But I only have a solver for linear programming using the simplex method:

$$J_{\max}: \quad c^Tx \\
\text{s.t.} \quad Ax \leq b, \quad x \geq 0$$

I have heard that the simplex method can be changed so that it minimizes $J_{\max}$ instead of maximizing it, and that this is called the dual method. It is an easy change involving some transposes of the objective function.
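For reference, the standard LP duality pair this alludes to is
$$\max_x \; c^Tx \ \ \text{s.t.}\ \ Ax \leq b,\ x \geq 0 \qquad \Longleftrightarrow \qquad \min_y \; b^Ty \ \ \text{s.t.}\ \ A^Ty \geq c,\ y \geq 0.$$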

But if I combine the dual method with the KKT conditions, can I then solve quadratic programming problems?

If yes: How can I rewrite the KKT conditions so they work with the dual method?

If no: What have I missed? I know that it is possible to use linear programming to solve quadratic programming problems.

Best Answer

Still me. It is possible to solve the quadratic program with the simplex method.

Method 1

The Lagrangian function of the original problem is
\begin{equation} L(x; \lambda_1, \lambda_2) = \frac{1}{2}x^TQx + c^Tx + \lambda_1^T(Ax-b) - \lambda_2^T x, \quad \lambda_1, \lambda_2 \ge 0, \end{equation}
and the KKT conditions are
\begin{equation} \begin{array}{c} {\nabla_x L(x;\lambda_1,\lambda_2) = Qx+c+A^T\lambda_1-\lambda_2 = 0,} \\ {Ax-b \leq 0, \quad x \ge 0,} \\ {\lambda_1 \ge 0, \quad \lambda_2 \ge 0,} \\ {\lambda_1^T(Ax-b)=0, \quad \lambda_2^T x= 0.} \end{array} \end{equation}
Except for the last (complementary slackness) condition, all of them are linear. We temporarily drop this condition and add a slack vector $y$, obtaining
\begin{equation} \begin{array}{c} {Qx+c+A^T\lambda_1-\lambda_2 = 0,} \\ {Ax-b+y = 0,} \\ {x \ge 0, \quad \lambda_1 \ge 0, \quad \lambda_2 \ge 0, \quad y \ge 0.} \end{array} \end{equation}
We can then use the simplex method to search for a feasible solution of these constraints. However, to satisfy the complementary slackness condition, at least one of ${\lambda_1}_i$ and $y_i$ must be zero, and likewise at least one of ${\lambda_2}_j$ and $x_j$. In other words, ${\lambda_1}_i$ and $y_i$ cannot both be basic variables at the same time, and neither can ${\lambda_2}_j$ and $x_j$; this restricted-basis rule is what is known as Wolfe's method.
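To make the structure of this system concrete, here is a minimal Python sketch (assuming NumPy, with a small made-up example for $Q$, $c$, $A$, $b$). Instead of a restricted-basis simplex, it brute-forces the complementary pairs, so it is only practical for tiny problems:

```python
import itertools
import numpy as np

def solve_qp_kkt(Q, c, A, b):
    """Find a KKT point of min 1/2 x^T Q x + c^T x s.t. Ax <= b, x >= 0
    by enumerating which member of each complementary pair is zero."""
    m, n = A.shape
    # Unknown vector z = [x, lambda1, lambda2, y], length 2n + 2m.
    M = np.zeros((n + m, 2 * n + 2 * m))
    M[:n, :n] = Q                        # Q x
    M[:n, n:n + m] = A.T                 # + A^T lambda1
    M[:n, n + m:2 * n + m] = -np.eye(n)  # - lambda2
    M[n:, :n] = A                        # A x
    M[n:, 2 * n + m:] = np.eye(m)        # + y
    rhs = np.concatenate([-c, b])        # right-hand sides [-c, b]

    # Complementary pairs: (lambda1_i, y_i) and (lambda2_j, x_j).
    pairs = [(n + i, 2 * n + m + i) for i in range(m)]
    pairs += [(n + m + j, j) for j in range(n)]

    for choice in itertools.product([0, 1], repeat=len(pairs)):
        zero_idx = [pair[k] for pair, k in zip(pairs, choice)]
        free_idx = [i for i in range(2 * n + 2 * m) if i not in zero_idx]
        try:
            sol = np.linalg.solve(M[:, free_idx], rhs)  # square (n+m) system
        except np.linalg.LinAlgError:
            continue  # singular basis choice, try the next one
        if np.all(sol >= -1e-9):         # all remaining variables nonnegative?
            z = np.zeros(2 * n + 2 * m)
            z[free_idx] = sol
            return z[:n]                 # the primal solution x
    return None

# Made-up example: min x1^2 + x2^2 - 2 x1 - 5 x2  s.t.  x1 + 2 x2 <= 4, x >= 0.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 2.0]])
b = np.array([4.0])
print(solve_qp_kkt(Q, c, A, b))  # -> [0.6 1.7]
```

Each choice of which member of every pair is pinned to zero reduces the KKT system to a square linear solve; the first sign-feasible solution is a global minimizer when $Q$ is positive semidefinite, since the QP is then convex.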

Method 2

But now I guess what you want is another method, called the Frank-Wolfe algorithm. For the iterate $x_k$ at step $k$, we compute the gradient
\begin{equation} \nabla_x J(x_k) = Qx_k + c, \end{equation}
and solve the following linearized problem:
\begin{equation} \begin{array}{cl} {\min_{x}} & J(x_k)+ \nabla_{x}J(x_k)^T(x-x_k) \\ \text{s.t.} & {Ax -b \leq 0,} \\ {} & {x \ge 0.} \end{array} \end{equation}
This is a linear program in $x$, and you can solve it directly with the simplex method. Letting $\hat{x}_k$ denote its solution, the next iterate is $x_{k+1} = x_k + \gamma_k(\hat{x}_k - x_k)$ for a step size $\gamma_k \in [0,1]$ (for a quadratic objective the exact line-search step has a closed form), and the iteration is repeated until convergence.
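As a minimal sketch of this loop in Python, using `scipy.optimize.linprog` as the LP solver (an arbitrary choice; any simplex implementation would do) and the closed-form exact line search that a quadratic objective admits:

```python
import numpy as np
from scipy.optimize import linprog

def frank_wolfe_qp(Q, c, A, b, x0, iters=100):
    """Minimize 1/2 x^T Q x + c^T x over {Ax <= b, x >= 0} by
    repeated linearization; x0 must be feasible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad = Q @ x + c          # gradient of the QP objective at x_k
        # LP subproblem: minimize grad^T s over the same feasible set
        res = linprog(grad, A_ub=A, b_ub=b, bounds=[(0, None)] * len(x))
        d = res.x - x             # direction toward the LP vertex
        # Exact line search for the quadratic objective, clipped to [0, 1]
        curvature = d @ Q @ d
        gamma = 1.0 if curvature <= 0 else min(1.0, max(0.0, -(grad @ d) / curvature))
        x = x + gamma * d
    return x

# Same made-up example as above; the loop lands on x = (0.6, 1.7).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 2.0]])
b = np.array([4.0])
print(frank_wolfe_qp(Q, c, A, b, x0=np.zeros(2)))
```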
