Assuming that we are working with a minimization objective function and we have identified the best basic feasible solution, i.e. the basic feasible solution that yields the smallest objective value, when is this best basic feasible solution NOT the optimal solution to the LP? Are there any conditions under which it wouldn't be?
[Math] Best basic feasible solution NOT optimal solution to LP
linear programming
Related Solutions
I would say yes to b). To be sure:
If the objective function $f$ is linear and not identically $0$, and $x_1$ and $x_2$ are two local maxima, then, by linearity, $$f(\lambda x_1 + (1-\lambda)x_2)= \lambda f(x_1) + (1-\lambda)f(x_2).$$ This means that $f(x_1)=f(x_2)$, by the following argument: suppose they are not equal, say $f(x_1)>f(x_2)$. Then $\varepsilon f(x_1)+(1-\varepsilon)f(x_2)>f(x_2)$ for every $\varepsilon\in(0,1]$, so you can find points $\varepsilon x_1+(1-\varepsilon)x_2$ arbitrarily close to $x_2$ whose values under $f$ are greater than $f(x_2)$, contradicting the assumption that $x_2$ is a local maximum.
This means that whenever there are two local maxima in this setting, all points on the segment joining them are local maxima too.
However, since all the nonbasic variables' reduced costs are negative, there are no other local maxima in a neighborhood of your feasible solution $x$. This immediately rules out further optimal solutions (because, by the argument above, they would imply the existence of local maxima in every neighborhood of $x$).
Edit: answer for b). My previous argument does not cover the case of a degenerate optimal solution, in which you can have two optimal bases. This answers no to the first question. Take for example the minimization problem
\begin{alignat*}{2} \min\, & x_1+x_2 \\ \text{s. t. } & x_1+x_2 & \ge 0 \\ & x_1+x_2 & \le 1 \\ & x_1,x_2\ge 0. \end{alignat*} It is clearly degenerate in $(x_1,x_2)=(0,0)$, where the first constraint is redundant. If you put it in standard form you get
\begin{alignat*}{2} \min \, & x_1+x_2 \\ \text{s. t. } & -x_1-x_2+s_1 & = 0 \\ & x_1+x_2 +s_2& = 1 \\ & x_1,x_2,s_1,s_2\ge 0. \end{alignat*} The optimal solution is $(x_1,x_2,s_1,s_2)=(0,0,0,1)$. You then have the optimal basis $\{s_1,s_2\}$, with strictly positive reduced costs $(1,1)$ for $x_1$ and $x_2$, respectively. However, the basis choice of $\{x_2,s_2\}$ gives the reduced costs of $(0,1)$ for $x_1$ and $s_1$, respectively, which means that this basis is optimal too.
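This can be checked by brute force. The sketch below (my own, not part of the original answer) enumerates every $2\times 2$ basis of the standard-form problem above, solves for the basic solution by Cramer's rule, and confirms that three bases are optimal with value $0$, all describing the same degenerate vertex $(x_1,x_2)=(0,0)$:

```python
# Brute-force check: enumerate all 2x2 bases of the standard-form LP above.
from itertools import combinations

# Columns of A for (x1, x2, s1, s2); rows are the two equality constraints.
cols = {"x1": (-1, 1), "x2": (-1, 1), "s1": (1, 0), "s2": (0, 1)}
b = (0, 1)
cost = {"x1": 1, "x2": 1, "s1": 0, "s2": 0}

optimal_bases = []
for basis in combinations(cols, 2):
    (a11, a21), (a12, a22) = cols[basis[0]], cols[basis[1]]
    det = a11 * a22 - a12 * a21
    if det == 0:
        continue  # singular basis matrix: not a valid basis
    # Cramer's rule for the 2x2 system B x_B = b
    v1 = (b[0] * a22 - a12 * b[1]) / det
    v2 = (a11 * b[1] - b[0] * a21) / det
    if v1 < 0 or v2 < 0:
        continue  # basic solution is infeasible
    value = cost[basis[0]] * v1 + cost[basis[1]] * v2
    if value == 0:
        optimal_bases.append(basis)

print(optimal_bases)
```

Besides $\{s_1,s_2\}$ and $\{x_2,s_2\}$, the symmetric basis $\{x_1,s_2\}$ turns out to be optimal as well.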
If the primal problem is feasible but unbounded in the direction of optimisation, then the dual has no feasible solution. Otherwise, if the primal problem has an optimal solution, then the dual also has an optimal solution.
So the answer to your question is that feasibility of the primal problem does not imply optimality for the dual problem. It only excludes the possibility that the dual is unbounded in the direction of optimisation.
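A minimal pair illustrating the unbounded case (my own example, not from the original):
\begin{alignat*}{2} \text{(P)}\quad \max\, & x \qquad\qquad & \text{(D)}\quad \min\, & y \\ \text{s. t. } & -x \le 1 & \text{s. t. } & -y \ge 1 \\ & x\ge 0 & & y \ge 0. \end{alignat*}
The primal (P) is feasible (e.g. $x=0$) and unbounded, while the dual constraint $-y\ge 1$ contradicts $y\ge 0$, so (D) is infeasible.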
Best Answer
The best basic feasible solution is optimal in the sense that no other solution has a better objective value. (There may be other, non-basic, solutions with the same optimal objective value.)
This is of course exploited in the Simplex method: we can limit the search to basic feasible solutions only.
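The idea that searching over basic feasible solutions suffices can be sketched as a naive enumerator for a tiny standard-form LP $\{\min c^\top x : Ax=b,\ x\ge 0\}$. The function and the example LP below are my own illustration (the simplex method visits these same points far more cleverly):

```python
import itertools
import numpy as np

def best_bfs(A, b, c):
    """Enumerate all basic feasible solutions of {Ax = b, x >= 0}
    and return a minimizer of c @ x together with its value.
    Only practical for tiny instances."""
    m, n = A.shape
    best_x, best_val = None, np.inf
    for basis in itertools.combinations(range(n), m):
        B = A[:, basis]
        try:
            x_B = np.linalg.solve(B, b)
        except np.linalg.LinAlgError:
            continue  # singular basis matrix: not a basis
        if np.any(x_B < -1e-9):
            continue  # basic solution is infeasible
        x = np.zeros(n)
        x[list(basis)] = x_B
        if c @ x < best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val

# Example: min -x1 - x2  s.t.  x1 + 2*x2 + s1 = 4,  3*x1 + x2 + s2 = 6
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0])
x, val = best_bfs(A, b, c)  # optimum at x1 = 8/5, x2 = 6/5, value -14/5
```

Out of the $\binom{4}{2}=6$ candidate bases here, only four are feasible, and the best one already gives the LP optimum.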