My question comes from Exercise 3.9 of the book Introduction to Linear Optimization by Bertsimas and Tsitsiklis, which is about a characterization of when the optimum is unique.
Exercise 3.9 Consider a linear programming problem in standard form and suppose that $x^*$ is an optimal basic feasible solution. Consider an optimal basis associated with $x^*$. Let $B$ and $N$ be the set of basic and nonbasic indices, respectively. Let $I$ be the set of nonbasic indices $i$ for which the corresponding reduced costs $\overline{c}_i$ are zero. (a) Show that if $I$ is empty, then $x^*$ is the only optimal solution.
In this exercise, the basis $B$ is assumed to be an optimal basis, hence the reduced costs $\overline{c}'=c'-c_B'B^{-1}A$ are all nonnegative. Under this assumption, the proof is easy: $I=\emptyset$ implies that $\overline{c}_j>0$ for all $j\in N$, and the optimum is then unique by Exercise 3.6.
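For reference, the argument behind Exercise 3.6 can be sketched as follows (my paraphrase, not the book's wording): for any feasible $x$ of the standard-form problem,
$$c'x = c'x^* + \sum_{j\in N}\overline{c}_j x_j.$$
If $\overline{c}_j>0$ for all $j\in N$ and $x$ is also optimal, the sum must vanish, forcing $x_j=0$ for every $j\in N$; the equation $Ax=b$ then determines the basic components, so $x=x^*$.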
My question is: What if we only assume that $\overline{c}_j\neq0$ (instead of $\overline{c}_j>0$) for all $j\in N$? That is:
If $x^*$ is an optimal solution with $\overline{c}_j\neq 0$ for all $j\in N$, does the problem still have a unique optimum?
I am only able to work out part of the answer:
- If $x^*$ is nondegenerate, then by Theorem 3.1 we have $\overline{c}\ge0$, so together with the assumption $\overline{c}_j\neq0$ for all $j\in N$, this gives $\overline{c}_j>0$ and $x^*$ is the unique optimal solution.
- If $x^*$ is degenerate, then it is possible that $\overline{c}_j<0$ for some $j\in N$. Since $x^*$ is optimal, the $j$th basic direction cannot be a feasible direction; that is, the reduced cost associated with every feasible basic direction must be nonnegative. However, I am not sure how to get from this to uniqueness of the optimum. I know that $c'x^*<c'x$ for every BFS $x$ adjacent to $x^*$, but is this enough to show that $x^*$ is a strict local (and hence global) minimizer?
Best Answer
The answer is no: it's possible to have a degenerate optimal solution, represented in a non-optimal basis, where all reduced costs are nonzero (some positive, some negative), and yet the optimal solution is not unique.
The terrible circumstance that makes this happen is that, in a linear program with degenerate corner points, we might be at an optimal solution and not know it. For example, suppose that we have the problem
\begin{array}{rr} \text{maximize} & x+y \\ \text{s.t.} & x+y \le 1 \\ & y \le 1 \\ & x,y \ge 0 \end{array} Let's add slack variables as usual, writing the constraints as $x+y+w_1 = 1$ and $y + w_2 = 1$.
There are three choices of basic variables that describe the degenerate optimal solution $(x,y)=(0,1)$:

1. Basic variables $x, y$: the reduced costs are $\overline{c}_{w_1}=-1$ and $\overline{c}_{w_2}=0$. This is an optimal basis (for this maximization, a basis is optimal when all reduced costs are $\le 0$), and the zero reduced cost of $w_2$ signals that the optimum may not be unique.
2. Basic variables $y, w_2$: the reduced costs are $\overline{c}_x=0$ and $\overline{c}_{w_1}=-1$. Again an optimal basis with a zero reduced cost.
3. Basic variables $y, w_1$: the reduced costs are $\overline{c}_x=1$ and $\overline{c}_{w_2}=-1$. All nonzero, some positive and some negative.
Choice 3 is an optimal solution, but the basis is not an optimal basis! If that's your situation in the simplex method, then you don't even know that the optimal solution you've reached is optimal. The optimal solution may or may not be unique.
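As a sanity check (not part of the original exercise), here is a short NumPy sketch that enumerates every basis of the standard-form problem above and prints its basic solution and reduced costs; the variable names and tolerances are my own choices:

```python
from itertools import combinations

import numpy as np

# Standard form of:  maximize x + y
#   s.t.  x + y + w1 = 1,   y + w2 = 1,   x, y, w1, w2 >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0, 1.0, 0.0, 0.0])  # maximization objective
names = ["x", "y", "w1", "w2"]

for basis in combinations(range(4), 2):
    cols = list(basis)
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-9:
        continue  # dependent columns: not a basis
    x_B = np.linalg.solve(B, b)
    if np.any(x_B < -1e-9):
        continue  # basic solution is infeasible
    p = np.linalg.solve(B.T, c[cols])  # simplex multipliers
    reduced = c - p @ A  # reduced costs (zero at the basic indices)
    print([names[j] for j in cols], "x_B =", x_B,
          {names[j]: reduced[j] for j in range(4) if j not in basis})
```

The basis $\{y, w_1\}$ shows up with $x_B=(1,0)$ (degenerate) and reduced costs $+1$ for $x$ and $-1$ for $w_2$, matching the discussion above.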
(On the other hand, as you've already seen, if we have an optimal solution represented in an optimal basis and the nonbasic reduced costs are all nonzero, then it's unique: in an optimal basis the reduced costs are all nonnegative, so nonzero reduced costs are in fact positive, and that is exactly the situation in part (a).)