[Math] optimal basis and optimal solution

linear programming

Determine which of the following is true:

a) Consider a maximization LP in SEF. Suppose $x$ is a basic feasible solution for which all nonbasic variables have strictly negative reduced costs. Then the LP has only one optimal basis.

b) Consider a maximization LP in SEF. Suppose $x$ is a basic feasible solution for which all nonbasic variables have strictly negative reduced costs. Then the LP has only one optimal solution.

I claim that both a) and b) are true, by the optimality theorem: for a maximization objective, if the reduced costs of all nonbasic variables are nonpositive, then the basic feasible solution determined by the basis is optimal. Is my reasoning valid?

Best Answer

I would say yes to b). To be sure:

If the objective function $f$ is linear and not identically $0$, and $x_1$ and $x_2$ are two local maxima, then, by linearity, $$f(\lambda x_1 + (1-\lambda)x_2)= \lambda f(x_1) + (1-\lambda)f(x_2).$$ This forces $f(x_1)=f(x_2)$, by the following argument: suppose they are not equal, say $f(x_1)>f(x_2)$. Then $\varepsilon f(x_1)+(1-\varepsilon)f(x_2)>f(x_2)$ for every $\varepsilon>0$, so you can find points $\varepsilon x_1+(1-\varepsilon)x_2$ arbitrarily close to $x_2$ whose values under $f$ are greater than $f(x_2)$, contradicting the assumption that $x_2$ is a local maximum.

This means that whenever there are two local maxima in this setting, every point on the segment joining them is a local maximum too.
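The contradiction in the argument above can be checked numerically. Here is a minimal sketch with a made-up linear objective and two points (the values of `c`, `x1`, `x2` are illustrative assumptions, not taken from the problem):

```python
import numpy as np

# Made-up data: a linear objective f(x) = c^T x and two points with f(x1) > f(x2)
c = np.array([1.0, 2.0])

def f(x):
    return c @ x

x1 = np.array([3.0, 1.0])   # f(x1) = 5
x2 = np.array([1.0, 1.0])   # f(x2) = 3

# The convex combinations eps*x1 + (1-eps)*x2 approach x2 as eps -> 0, yet by
# linearity their value eps*f(x1) + (1-eps)*f(x2) exceeds f(x2) for every
# eps > 0 -- so x2 cannot be a local maximum when f(x1) > f(x2).
for eps in [0.1, 0.01, 0.001]:
    x = eps * x1 + (1.0 - eps) * x2
    assert np.isclose(f(x), eps * f(x1) + (1.0 - eps) * f(x2))
    assert f(x) > f(x2)
```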

However, since the reduced costs of all nonbasic variables are strictly negative, any feasible point other than $x$ must make some nonbasic variable positive and hence has a strictly smaller objective value; in particular, there are no other local maxima in any neighborhood of $x$. This immediately rules out other optimal solutions, because, by the argument above, a second optimum would imply the existence of local maxima arbitrarily close to $x$.
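The identity behind this uniqueness claim is that for any feasible point, the objective value equals the basic solution's value plus the sum of reduced costs times nonbasic values. A minimal numerical sketch, using a hypothetical one-constraint max LP in SEF (all data here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical max LP in SEF:  max 2*x1 + 3*x2
#   s.t. x1 + x2 + s = 4,  with x1, x2, s >= 0.
# Basis {x2}: x* = (0, 4, 0) with value z* = 12, and the nonbasic variables
# (x1, s) have reduced costs (-1, -3), both strictly negative.
z_star = 12.0
r_x1, r_s = -1.0, -3.0

# Every feasible x satisfies z(x) = z* + r_x1*x1 + r_s*s, so any feasible
# point with (x1, s) != (0, 0) is strictly worse: the optimum is unique.
for _ in range(5):
    x1, s = rng.uniform(0.01, 2.0, size=2)  # nonzero nonbasic values
    x2 = 4.0 - x1 - s                       # solve the constraint for x2 (>= 0 here)
    z = 2.0 * x1 + 3.0 * x2
    assert np.isclose(z, z_star + r_x1 * x1 + r_s * s)
    assert z < z_star
```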


Edit: Answer for a). My argument above does not consider the case of a degenerate optimal solution. In that case, you can have two optimal bases, so the answer to a) is no. Take for example the minimization problem

\begin{alignat*}{2} \min\, & x_1+x_2 \\ \text{s. t. } & x_1+x_2 & \ge 0 \\ & x_1+x_2 & \le 1 \\ & x_1,x_2\ge 0. \end{alignat*} It is clearly degenerate at $(x_1,x_2)=(0,0)$, where the first constraint is redundant. Putting it in standard form gives

\begin{alignat*}{2} \min \, & x_1+x_2 \\ \text{s. t. } & -x_1-x_2+s_1 & = 0 \\ & x_1+x_2 +s_2& = 1 \\ & x_1,x_2,s_1,s_2\ge 0. \end{alignat*} The optimal solution is $(x_1,x_2,s_1,s_2)=(0,0,0,1)$. You then have the optimal basis $\{s_1,s_2\}$, with strictly positive reduced costs $(1,1)$ for $x_1$ and $x_2$, respectively. However, the basis choice of $\{x_2,s_2\}$ gives the reduced costs of $(0,1)$ for $x_1$ and $s_1$, respectively, which means that this basis is optimal too.
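A quick numerical check of both bases, using the standard-form data above (the helper `reduced_costs` is just an illustrative sketch, computing $c_j - y^T A_j$ with $y^T = c_B B^{-1}$):

```python
import numpy as np

# Standard-form data for the example LP, variables ordered (x1, x2, s1, s2):
#   min  x1 + x2
#   s.t. -x1 - x2 + s1      = 0
#         x1 + x2      + s2 = 1
A = np.array([[-1.0, -1.0, 1.0, 0.0],
              [ 1.0,  1.0, 0.0, 1.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 1.0, 0.0, 0.0])

def reduced_costs(basis):
    """Basic solution and reduced costs c_j - y^T A_j for a list of basic column indices."""
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)          # values of the basic variables
    y = np.linalg.solve(B.T, c[basis])   # simplex multipliers, y^T = c_B B^{-1}
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    return x_B, {j: c[j] - y @ A[:, j] for j in nonbasic}

# Basis {s1, s2}: x_B = (0, 1), reduced costs of x1 and x2 both 1
# (strictly positive, hence optimal for this min problem)
print(reduced_costs([2, 3]))

# Basis {x2, s2}: degenerate (x2 = 0), reduced costs of x1 and s1 are (0, 1),
# so this basis is optimal too
print(reduced_costs([1, 3]))
```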
