[Math] Linear optimization constrained by cost function

Tags: linear programming, optimization

Suppose I have an optimization problem of the form:

$$\min \mathbf{d}^{T}\mathbf{y}$$

subject to

$$\mathbf{M}\mathbf{y} \geq \mathbf{d}, \qquad \mathbf{y} \geq 0$$

If a solution $\mathbf{s}$ satisfies $\mathbf{M}\mathbf{s} = \mathbf{d}$ and $\mathbf{s} \geq 0$, how can it be shown that $\mathbf{s}$ is optimal for the above linear programming problem?

My initial thought was to form the dual:

$$\max \mathbf{p}^{T}\mathbf{d}$$

subject to

$$\mathbf{p}^{T}\mathbf{M} \leq \mathbf{d}^{T}, \qquad \mathbf{p} \geq 0$$

Thus weak duality gives

$$\mathbf{d}^{T}\mathbf{y} \leq \mathbf{p}^{T}\mathbf{d}.$$

And the constraint of the dual gives $\mathbf{p}^{T}\mathbf{M} \leq \mathbf{d}^{T}$, so, since $\mathbf{y} \geq 0$,

$$\mathbf{p}^{T}\mathbf{M}\mathbf{y} \leq \mathbf{d}^{T}\mathbf{y},$$

and thus, since $\mathbf{M}\mathbf{y} \geq \mathbf{d}$ and $\mathbf{p} \geq 0$ imply $\mathbf{p}^{T}\mathbf{d} \leq \mathbf{p}^{T}\mathbf{M}\mathbf{y}$,

$$\mathbf{p}^{T}\mathbf{d} \leq \mathbf{d}^{T}\mathbf{y}.$$

Combining the two inequalities shows that $\mathbf{p}^{T}\mathbf{d}$ must equal $\mathbf{d}^{T}\mathbf{y}$, and thus with strong duality this means that $\mathbf{p}^{T}\mathbf{d}$ is the optimal value.

Where did I go wrong, and how does $\mathbf{M}$ being symmetric fit into the problem?

Best Answer

You can go this route: You know $\mathbf{M}\mathbf{s} = \mathbf{d}$. Taking transposes, you have $\mathbf{s}^{T}\mathbf{M}^{T} = \mathbf{d}^{T}$. But (the key step) since $\mathbf{M}^{T} = \mathbf{M}$, this means that $\mathbf{s}$ is feasible for the dual problem. Since $\mathbf{s}$ is feasible for the primal and for the dual, and since the primal and dual have the same objective function, weak duality shows you that $\mathbf{s}$ must be optimal.
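Spelled out as a chain of inequalities (a sketch in the notation above, with $\mathbf{s}$ playing the role of the dual variable $\mathbf{p}$):

```latex
% Weak duality: every dual-feasible p and primal-feasible y satisfy
% p^T d <= d^T y.  Taking p = s (dual feasible since M^T = M):
\[
\mathbf{d}^{T}\mathbf{s} \;=\; \mathbf{s}^{T}\mathbf{d}
\;\leq\; \mathbf{d}^{T}\mathbf{y}
\qquad \text{for every primal-feasible } \mathbf{y},
\]
% so no primal-feasible point has a smaller objective value than s,
% i.e. s attains the primal minimum.
```

The first equality is just the scalar identity $\mathbf{d}^{T}\mathbf{s} = \mathbf{s}^{T}\mathbf{d}$, which lets the primal objective value of $\mathbf{s}$ be read as its dual objective value.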

If $\mathbf{M}^{T} \neq \mathbf{M}$, then $\mathbf{s}$ might not be optimal. For example, suppose we have the following linear program.

$$\min x + y$$

subject to

$$\frac{1}{2}x + y \geq 1,$$ $$\frac{1}{2}x + 2y \geq 1,$$ $$x,y \geq 0.$$

The objective and right-hand side coefficient vectors are the same, as in the problem statement. Solving the inequalities at equality gives you $x = 2$, $y=0$. However, this is not optimal. The solution $x = 0$, $y = 1$ yields a smaller objective function value (and is actually the optimal solution).
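The counterexample is easy to verify numerically; here is a small pure-Python check (the helper names `feasible` and `objective` are my own, not from the post):

```python
# Check the counterexample LP:
#   min x + y  subject to  (1/2)x + y >= 1,  (1/2)x + 2y >= 1,  x, y >= 0.

def feasible(x, y, tol=1e-9):
    """True if (x, y) satisfies all constraints of the example LP."""
    return (0.5 * x + y >= 1 - tol
            and 0.5 * x + 2 * y >= 1 - tol
            and x >= -tol and y >= -tol)

def objective(x, y):
    return x + y

# Point obtained by solving the constraints at equality (M s = d):
s = (2.0, 0.0)
# Claimed optimal solution:
opt = (0.0, 1.0)

assert feasible(*s) and feasible(*opt)
print(objective(*s))    # 2.0
print(objective(*opt))  # 1.0 -- strictly smaller, so s is not optimal
```

That the value $1$ is actually optimal follows from $x + y \geq \frac{1}{2}x + y \geq 1$ for $x \geq 0$, so no feasible point does better than $(0, 1)$.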
