[Math] Feasibility and optimality conditions in linear programming

discrete optimization, linear programming, optimization

Would anyone please explain in more detail (with examples or graphical illustrations) these optimality and feasibility conditions for linear programming?

Assume that we have the following linear program:

$$z= \min \{c'x : Ax=b,\ x \geq 0\}$$

A basis $B$ yields an optimal basic feasible solution if it satisfies two conditions:

  • Feasibility: $B^{-1}b \geq 0$
  • Optimality: $c' \geq c_{B}'B^{-1}A$.

Many thanks.

Best Answer

If $B \in \mathbb{R}^{m \times m}$ is a nonsingular submatrix of $A$, then the problem $$z=\min \{c'x : Ax=b,\ x\geq 0\}\ \ \ \ \ \ (1)$$ can be written as \begin{align} &z=\min c'_{B}x_{B}+c'_{N}x_{N}\\ &Bx_B+Nx_N=b\ \ \ \ \ \ \ \ \ \ \ (2)\\ &x_{B} \geq 0,\ x_{N} \geq 0 \end{align} where $N \in \mathbb{R}^{m \times (n-m)}$ is the submatrix of $A$ whose columns are not in $B$, and the subscripts of $x_B$, $x_N$, $c_B$, $c_N$ are taken accordingly. Solving the system of equations for $x_B$ we get \begin{align} x_B=B^{-1}b - B^{-1}Nx_N\ \ \ \ \ (3) \end{align} Equation (3) shows that all the solutions of the equation system depend on $n-m$ parameters (namely the $n-m$ nonbasic variables $x_N$). The particular solution obtained by setting $x_N=\mathbb{0}_{n-m}$ is called a basic solution of the equation system. If $x=[x_B,x_N]'=[B^{-1}b,\mathbb{0}_{n-m}]'$ is such that $x_B=B^{-1}b \geq 0$, then $x$ is called a basic feasible solution of the LP problem (1) or (2); $x_B$ are the basic variables and $x_N$ are the nonbasic variables. Note that the feasibility condition is necessary for a basic solution to be optimal. As for the optimality condition, there are two ways to derive it.
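To make this concrete, here is a minimal NumPy sketch of a basic feasible solution: pick $m$ columns of $A$ as $B$, set the nonbasic variables to zero and solve $Bx_B=b$. The data $A$, $b$, $c$ and the choice of basic columns below are made up purely for illustration.

```python
# Minimal sketch of a basic (feasible) solution; A, b, c and the basis
# choice are illustrative, not taken from the question.
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])   # m = 2 constraints, n = 4 variables
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

basic = [0, 1]                          # columns of A chosen to form B
B = A[:, basic]                         # nonsingular 2 x 2 submatrix

x = np.zeros(4)
x[basic] = np.linalg.solve(B, b)        # x_B = B^{-1} b; nonbasic part stays 0

print("x =", x)                                   # [3. 1. 0. 0.]
print("feasible basis:", np.all(x[basic] >= 0))   # feasibility: B^{-1} b >= 0
```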

  1. Substituting expression (3) for $x_B$ in problem (2) we get \begin{align} &z=\min c'_{B}B^{-1}b+\mathbb{0}_m' x_B+(c'_{N}-c_B'B^{-1}N)x_{N}\\ & x_{N} \geq 0 \end{align} The above problem is called the reduced problem; the vector \begin{align} \hat{c}'=[\hat{c}_B', \hat{c}_N']=[\mathbb{0}'_{m},\ c_N'-c_B'B^{-1}N]=c'-c_B'B^{-1}[B\: |\: N]= c'-c_B'B^{-1}A \end{align} is the vector of reduced costs, and the constant term $c_B'B^{-1}b$ in the objective function is the value the objective takes at the current basic feasible solution $[x_B, \mathbb{0}_{n-m}]'$. Now assume we want to perform a pivot operation, that is, we want a nonbasic variable, say $x_{k}$, to enter the basis and one of the basic variables, say $x_h$, to leave the basis. The pivot operation can be performed by setting $x_h=0$ and letting $x_{k}$ take the maximum allowed increase, say $\delta \geq 0$. Then the change in the objective function will be \begin{align} \Delta z = \hat{c}_k \delta \end{align} It follows that $\Delta z \geq 0$ for every possible pivot if $\hat{c}_k \geq 0$ for all the nonbasic variables, which is a sufficient condition for the current basis $B$ to be optimal. Note that this condition is also necessary if $[x_B, \mathbb{0}_{n-m}]'$ is nondegenerate. (A numerical sketch of this reduced-cost test is given after this list.)

  2. A more elegant way to derive the optimality condition relies on duality theory. Let $P$ be problem (1) and $D$ its dual \begin{align} &w=\max y'b\\ &y'A \leq c' \end{align} The Weak Duality Theorem states that if $x$ and $y$ are feasible solutions for $P$ and $D$, respectively, then \begin{align} y'b \leq c'x \end{align} It follows that if $x$ and $y$ are feasible solutions for $P$ and $D$, respectively, and $w=y'b = c'x=z$, then $x$ and $y$ are also optimal. Now let $x=[x_B, \mathbb{0}_{n-m}]'$ be a basic feasible solution for $P$. Set \begin{align} y'=c_B'B^{-1} \end{align} We get \begin{align} w=y'b=c_B'B^{-1}b=c_B'x_B=c'x=z \end{align} Therefore $x$ and $y$ are points at which the primal and dual objective functions take the same value. Hence the primal basic feasible solution $x=[x_B, \mathbb{0}_{n-m}]'$ is optimal if the dual solution $y'=c_B'B^{-1}$ is feasible, that is \begin{align} c' \geq c_{B}'B^{-1}A \end{align} (The second sketch after this list checks this dual-feasibility condition numerically.)
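For item 1, here is a small self-contained sketch of the reduced-cost test $\hat{c}'=c'-c_B'B^{-1}A \geq 0$, using the same illustrative data as the earlier sketch (again, the numbers are made up for the example).

```python
# Sketch of the reduced-cost optimality test; same illustrative data as above.
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
c = np.array([-2.0, -3.0, 0.0, 0.0])
basic = [0, 1]

B = A[:, basic]
y = np.linalg.solve(B.T, c[basic])   # y' = c_B' B^{-1}, i.e. solve B'y = c_B
reduced = c - A.T @ y                # reduced costs c-hat = c' - c_B' B^{-1} A

print("reduced costs:", reduced)               # [0.  0.  1.5 0.5]
print("basis optimal:", np.all(reduced >= 0))  # all >= 0  =>  B is optimal
```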
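And for item 2, a sketch of the duality argument: with $y'=c_B'B^{-1}$, dual feasibility $y'A \leq c'$ certifies optimality, and the primal and dual objectives coincide at $x$ and $y$ (same illustrative data as before).

```python
# Sketch of the duality-based check; same illustrative data as above.
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])
basic = [0, 1]

B = A[:, basic]
x = np.zeros(4)
x[basic] = np.linalg.solve(B, b)     # primal basic feasible solution [3, 1, 0, 0]
y = np.linalg.solve(B.T, c[basic])   # candidate dual solution y' = c_B' B^{-1}

print("dual feasible:", np.all(A.T @ y <= c + 1e-9))   # y'A <= c'
print("z = c'x =", c @ x, "  w = y'b =", y @ b)        # both -9.0, so x is optimal
```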