Adjacency of Optimal Basic Feasible Solutions in Linear Programming

linear programming, optimization, polyhedra, simplex method

I am studying linear optimization from the book Introduction to Linear Optimization by Bertsimas and Tsitsiklis and came up with the following question:

If a standard form problem
$$
\begin{align}
\min&\quad c'x \\
\text{s.t.}&\quad Ax=b \\
&\quad x\ge 0
\end{align}
$$

has exactly two optimal basic feasible solutions $x^*$ and $x^{**}$, must they be adjacent?

Here "adjacent" means that the bases of $x^*$ and $x^{**}$ differ by one element.

Intuitively this is true. If $x^*$ and $x^{**}$ were not adjacent, then perhaps there would be a path from $x^*$ to $x^{**}$ along which all of the intermediate BFS are also optimal, contradicting the assumption that there are exactly two optimal basic feasible solutions. However, I do not know how to turn this idea into a rigorous proof.

I thought about running the simplex method with $x^*$ as the initial point and letting it lead to $x^{**}$, but this doesn't work: $x^*$ is already optimal, so the simplex method may terminate immediately (when all reduced costs at $x^*$ are nonnegative). Any help will be appreciated.
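For concreteness, here is a small instance of the situation I have in mind. The example LP and the brute-force basis enumeration are my own illustration (not from the book): the LP has exactly two optimal basic feasible solutions, and their bases do differ by one element.

```python
# A standard-form LP with exactly two optimal basic feasible solutions,
# found by enumerating every basis of A.  Toy example of my own.
import itertools
import numpy as np

def optimal_bfs(A, b, c, tol=1e-9):
    """Enumerate all bases of A and return (basis, x) for every optimal BFS."""
    m, n = A.shape
    feasible = []
    for basis in itertools.combinations(range(n), m):
        B = A[:, basis]
        if abs(np.linalg.det(B)) < tol:
            continue                      # columns are linearly dependent
        x_B = np.linalg.solve(B, b)
        if np.any(x_B < -tol):
            continue                      # basic solution violates x >= 0
        x = np.zeros(n)
        x[list(basis)] = x_B
        feasible.append((set(basis), x))
    best = min(c @ x for _, x in feasible)
    return [(bs, x) for bs, x in feasible if c @ x <= best + tol]

# min -x1 - x2  s.t.  x1 + x2 + s = 1,  x >= 0
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -1.0, 0.0])

opt = optimal_bfs(A, b, c)
for basis, x in opt:
    print(sorted(basis), x)
# Exactly two optimal BFS, (1, 0, 0) and (0, 1, 0); their bases {0} and {1}
# differ in one element, so the two solutions are adjacent.
```

Of course, this only checks one instance; my question is whether adjacency must hold in general.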

Best Answer

Hint:

Consider a two-dimensional feasible region: [figure: polygonal feasible region]

Then, consider the objective function of the model (the green line) on this feasible region: [figure: objective level line over the feasible region]

Then, consider the analogous picture in three dimensions: [figure: three-dimensional feasible region]

There the optimal region is a triangle: it has three optimal basic feasible solutions, and each of them, say $x^*$, is adjacent to the other two, $x^{**}$ and $x^{***}$.

Then, in a fourth dimension, there would exist four equally optimal vertices, and so on in higher dimensions. Think about how the simplex method would handle these cases, and how this fits intuitively with the objective function.

With each additional dimension we add, the optimal face goes from a line segment, to a triangle, to a tetrahedron, and so on.

In other words, a 1-simplex, a 2-simplex, a 3-simplex, and so on.
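This family of examples can be checked numerically. The sketch below (toy LPs of my own, using brute-force basis enumeration rather than the simplex method) builds, for each $k$, a standard-form LP whose optimal face is a $(k-1)$-simplex with $k$ optimal vertices, and verifies that every pair of optimal bases differs in exactly one element:

```python
# Numerical check: for each k, the LP
#   min -(x1 + ... + xk)  s.t.  x1 + ... + xk + s = 1,  x >= 0
# has k optimal BFS (the vertices of a (k-1)-simplex), and every pair of
# optimal bases differs in exactly one element.  Toy construction of my own.
import itertools
import numpy as np

def optimal_bfs(A, b, c, tol=1e-9):
    """Enumerate all bases of A and return (basis, x) for every optimal BFS."""
    m, n = A.shape
    feasible = []
    for basis in itertools.combinations(range(n), m):
        B = A[:, basis]
        if abs(np.linalg.det(B)) < tol:
            continue                      # columns are linearly dependent
        x_B = np.linalg.solve(B, b)
        if np.any(x_B < -tol):
            continue                      # basic solution violates x >= 0
        x = np.zeros(n)
        x[list(basis)] = x_B
        feasible.append((set(basis), x))
    best = min(c @ x for _, x in feasible)
    return [(bs, x) for bs, x in feasible if c @ x <= best + tol]

results = {}
for k in (2, 3, 4):                       # segment, triangle, tetrahedron
    A = np.ones((1, k + 1))
    b = np.array([1.0])
    c = np.concatenate([-np.ones(k), [0.0]])
    opt = optimal_bfs(A, b, c)
    pairwise_adjacent = all(
        len(b1 ^ b2) == 2                 # bases differ in exactly one element
        for (b1, _), (b2, _) in itertools.combinations(opt, 2)
    )
    results[k] = (len(opt), pairwise_adjacent)
print(results)
```

Note that with two optimal BFS (the $k=2$ case), adjacency happens to hold here; the higher-dimensional cases show why the picture with more than two optimal vertices is worth keeping in mind.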