Linear Programming – Primal and Dual Solution


Let's say we are given a primal linear programming problem:
$\begin{array}{ccc}
\text{minimize } & c^{T}x & &\\
\text{subject to: } & Ax & \ge & b \\
& x & \ge & 0
\end{array}$

The dual problem is defined as:
$\begin{array}{ccc}
\text{maximize } & b^{T}y & &\\
\text{subject to: } & A^{T}y & \le & c \\
& y & \ge & 0
\end{array}$

According to the duality theorem, $c^{T}x \ge b^{T}y$ for every pair of feasible solutions $x$ and $y$ (weak duality), and moreover, when $x$ and $y$ are optimal solutions to the primal and the dual problem, then $c^{T}x = b^{T}y$ (strong duality).
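Weak duality can be sanity-checked numerically. The following sketch uses made-up data ($A$, $b$, $c$, and the feasible points $x$, $y$ are illustrative assumptions, not part of the question):

```python
# Weak duality check on an illustrative primal/dual pair:
#   primal: min c^T x  s.t.  A x >= b, x >= 0
#   dual:   max b^T y  s.t.  A^T y <= c, y >= 0
A = [[1, 2], [3, 1]]
b = [4, 6]
c = [5, 4]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

x = [2, 1]   # primal-feasible: A @ x = [4, 7] >= b
y = [1, 1]   # dual-feasible:   A^T @ y = [4, 3] <= c

assert all(lhs >= bi for lhs, bi in zip(matvec(A, x), b))
assert all(dot(col, y) <= ci for col, ci in zip(zip(*A), c))

# Weak duality: b^T y <= c^T x for ANY feasible pair, optimal or not.
print(dot(b, y), "<=", dot(c, x))  # 10 <= 14
```

Neither point is optimal here; the inequality holds for every feasible pair, and the gap closes only at optimality.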

So if we define a linear programming task with the following constraints:
$\begin{array}{ccc}
Ax & \ge & b \\
x & \ge & 0 \\
A^{T}y & \le & c \\
y & \ge & 0 \\
b^{T}y & \ge & c^{T}x
\end{array}$
then any feasible solution to this task must be an optimal solution to both the primal and the dual task, because by weak duality the last constraint can be satisfied only if $x$ and $y$ are optimal.
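To illustrate the combined system, here is a self-contained check with made-up data (the matrices and the optima, found by hand, are illustrative assumptions): the optimal primal/dual pair satisfies all five constraints simultaneously, with the last one holding as an equality.

```python
from fractions import Fraction as F  # exact arithmetic, no rounding issues

# Illustrative data: primal min c^T x s.t. A x >= b, x >= 0.
A = [[F(1), F(2)], [F(3), F(1)]]
b = [F(4), F(6)]
c = [F(5), F(4)]

x_opt = [F(8, 5), F(6, 5)]   # primal optimum (computed by hand)
y_opt = [F(7, 5), F(6, 5)]   # dual optimum (computed by hand)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

cols = list(zip(*A))  # columns of A = rows of A^T

assert all(lhs >= bi for lhs, bi in zip(matvec(A, x_opt), b))   # A x >= b
assert all(xi >= 0 for xi in x_opt)                             # x >= 0
assert all(dot(col, y_opt) <= ci for col, ci in zip(cols, c))   # A^T y <= c
assert all(yi >= 0 for yi in y_opt)                             # y >= 0
assert dot(b, y_opt) >= dot(c, x_opt)                           # b^T y >= c^T x

print(dot(c, x_opt), dot(b, y_opt))  # both equal 64/5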

The question is: why is this approach not used?

I see three potential reasons:
1) I've made a mistake somewhere and it doesn't make any sense.
2) It is often the case that the primal or the dual problem is infeasible. I've seen such examples, but in all of them the other problem was unbounded; is that the only case in which exactly one of the primal and dual problems is infeasible?
3) Finding any feasible solution might be hard. The so-called Phase I of the simplex method can be used to find a feasible solution. I couldn't find the complexity of this phase; is it exponential in the worst case, just like the simplex algorithm itself? The other question is: what is the fastest method to determine whether any feasible solution exists at all? The solution itself doesn't have to be found.
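For context on Phase I: one common construction (a sketch, not the only formulation) reduces feasibility of $\{Ax \ge b,\ x \ge 0\}$ to an auxiliary LP with a single artificial variable $t$:

$\begin{array}{ccc}
\text{minimize } & t & &\\
\text{subject to: } & Ax + t\mathbf{1} & \ge & b \\
& x \ge 0, \; t & \ge & 0
\end{array}$

This auxiliary problem is always feasible (take $x = 0$ and $t$ large enough), and the original system is feasible exactly when its optimal value is $0$. So deciding feasibility is itself an LP, which is why it is not easier than optimization in the worst case.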

Best Answer

There are three things that have to be satisfied in order for a solution to a linear programming problem to be optimal:

  1. The primal solution must be feasible.
  2. The dual solution must be feasible.
  3. Complementary slackness must be satisfied. (Remember that primal variables are paired with dual slack variables and dual variables are paired with primal slack variables. Complementary slackness is the requirement that, for each of these pairs, at least one variable must be zero.)
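The pairing in (3) can be made concrete with a small example. The data and the optimal pair below are illustrative assumptions (chosen so that one slack in each problem is strictly positive), not taken from the answer:

```python
# Illustrative data: primal min c^T x s.t. A x >= b, x >= 0.
A = [[1, 0], [1, 1]]
b = [2, 3]
c = [1, 2]
x = [3, 0]   # primal optimum (by inspection)
y = [0, 1]   # dual optimum (by inspection)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

primal_slack = [dot(row, x) - bi for row, bi in zip(A, b)]      # A x - b
dual_slack = [ci - dot(col, y) for col, ci in zip(zip(*A), c)]  # c - A^T y

# Complementary slackness: in each (dual variable, primal slack) pair
# and each (primal variable, dual slack) pair, at least one is zero.
assert all(yi * si == 0 for yi, si in zip(y, primal_slack))
assert all(xi * si == 0 for xi, si in zip(x, dual_slack))
print(primal_slack, dual_slack)  # [1, 0] [0, 1]
```

Here the first primal constraint is slack ($3 > 2$), so its dual variable $y_1$ is zero, and $x_2 = 0$ because its dual constraint is slack.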

The primal simplex method (after Phase I) keeps (1) and (3) always true and searches for a solution that satisfies (2). The dual simplex method (again, after Phase I), keeps (2) and (3) always true and searches for a solution that satisfies (1).

The approach you are describing (minus the $b^Ty \geq c^T x$ constraint) is used. It's the other option, in which (1) and (2) are always kept true while the algorithm searches for a solution that satisfies (3). As Yuval Filmus indicates, this is called a primal-dual method or the parametric self-dual simplex method. See, for example, Rader's Deterministic Operations Research, pp. 432-440, or Vanderbei's Linear Programming: Foundations and Extensions, pp. 119-121. (See also Vanderbei's text for how to find an initial feasible solution to both problems, i.e., Phase I.) The idea dates back at least to George Dantzig, the inventor of the simplex method.

As a side comment, Vanderbei indicates that the parametric self-dual simplex method is more amenable to probabilistic analysis than the other versions of the simplex method.