[Math] Proving optimal solution for Linear Programming

convex-analysis, linear-algebra, linear-programming, optimization, real-analysis

Suppose we have a standard optimization problem. $A'$ is an optimal solution to the problem. If we add a constraint to our original optimization problem, and $A'$ satisfies the new constraint, then is $A'$ still optimal for the new optimization problem? If so, prove it.

My solution:
Yes, $A'$ is still optimal for the new optimization problem. Suppose, for contradiction, that it is not optimal for the new problem.

WLOG, assume the original problem is a minimization problem. Then, for the original problem, we have $c'A' < c'x$ for all $x$ in the original feasible region $O$. Since $A'$ is not optimal for the new problem, there exists some $x_1$ in the new feasible region $N$ such that $c'x_1 < c'A'$. However, since we only added a constraint and kept all the previous constraints, the new feasible region must be a subset of the original feasible region. Thus $x_1 \in O$. But this means that $A'$ is not optimal for the original problem, since $x_1$ is in the original feasible region $O$ yet $c'x_1 < c'A'$. Thus, we have reached a contradiction.
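This is not part of the proof, but here is a small numerical sanity check of the subset argument, sketched with SciPy's `linprog` (the LP data below are made up purely for illustration):

```python
# Not part of the proof -- a numerical sanity check of the subset argument,
# assuming SciPy is available. The LP data are made up for illustration.
from scipy.optimize import linprog

c = [1.0, 2.0]

# Original problem: minimize c'x subject to x1 + x2 >= 1, x >= 0.
# linprog expects A_ub @ x <= b_ub, so x1 + x2 >= 1 becomes -x1 - x2 <= -1.
A_ub = [[-1.0, -1.0]]
b_ub = [-1.0]
orig = linprog(c, A_ub=A_ub, b_ub=b_ub,
               bounds=[(0, None), (0, None)], method="highs")

# New problem: add the constraint x2 <= 0.5, which the original optimum
# x* = (1, 0) happens to satisfy.
A_ub_new = A_ub + [[0.0, 1.0]]
b_ub_new = b_ub + [0.5]
new = linprog(c, A_ub=A_ub_new, b_ub=b_ub_new,
              bounds=[(0, None), (0, None)], method="highs")

print(orig.x, orig.fun)  # roughly (1, 0) with optimal value 1
print(new.x, new.fun)    # same optimal value: the added constraint changes nothing
```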

Is this proof correct? Or is it incorrect or perhaps not rigorous enough? For reference the knowledge we have at our disposal is Bertsimas: Linear Optimization Chapter 1. Thanks.

Best Answer

Yes, you have gotten the main idea if the standard optimization problem is a linear one.

Minor comment if the problem is restricted to linear programming:

If $A'$ is optimal, then we can only write $$c'A' \color{blue}\le c'x,$$ not a strict inequality, since there may be several optimal solutions attaining the same value.

If the problem is not restricted to linear programming, let $f$ be the objective function and write $f(x)$ rather than $c'x$; the same subset argument goes through unchanged.
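A minimal sketch (with made-up data, assuming SciPy is available) of why the inequality cannot be taken strict: minimizing $x_1$ over the unit square, both $(0,0)$ and $(0,1)$ are optimal, so an optimal point only satisfies $c'A' \le c'x$ for all feasible $x$.

```python
# Made-up example showing that optimality only gives c'x* <= c'x:
# minimize x1 over the unit square; every point with x1 = 0 is optimal.
from scipy.optimize import linprog

c = [1.0, 0.0]
res = linprog(c, bounds=[(0, 1), (0, 1)], method="highs")

# Two distinct feasible points that both attain the optimal value 0.
for x in ([0.0, 0.0], [0.0, 1.0]):
    value = c[0] * x[0] + c[1] * x[1]
    print(x, value, abs(value - res.fun) < 1e-9)  # both match the solver's optimum
```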
