I would say yes to b). To be sure:
If the objective function $f$ is linear and not identically $0$, and if $x_1$ and $x_2$ are two local maxima, then, by linearity, $$f(\lambda x_1 + (1-\lambda)x_2)= \lambda f(x_1) + (1-\lambda)f(x_2)$$ for every $\lambda\in[0,1]$. This means that $f(x_1)=f(x_2)$, by the following argument: suppose they are not equal, say $f(x_1)>f(x_2)$. Then $\varepsilon f(x_1)+(1-\varepsilon)f(x_2)>f(x_2)$ for every $\varepsilon\in(0,1]$, so by taking $\varepsilon$ small you can find points $\varepsilon x_1+(1-\varepsilon)x_2$ which are arbitrarily close to $x_2$ and whose values under $f$ are greater than $f(x_2)$, contradicting the assumption that $x_2$ is a local maximum.
This means that whenever there are two local maxima in this setting, all points on the segment joining them are local maxima too.
However, since all the nonbasic variables' reduced costs are negative, there are no other local maxima in a neighborhood of your feasible solution $x$. This immediately rules out other optimal solutions (because, by the argument above, any other optimal solution would imply the existence of local maxima in every neighborhood of $x$, along the segment joining the two points).
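To see the segment claim concretely, here is a minimal numpy sketch on a toy maximization LP (maximize $x+y$ subject to $x+y\le 1$, $x,y\ge 0$), chosen purely for illustration: $(1,0)$ and $(0,1)$ are both optimal vertices, and every convex combination of them stays feasible with the same objective value.

```python
# Toy check of the segment argument (illustrative example, not from the
# original question): (1, 0) and (0, 1) are both maximizers of x + y over
# {x + y <= 1, x, y >= 0}, and so is every point on the segment between them.
import numpy as np

c = np.array([1.0, 1.0])      # linear objective f(x) = c @ x
x1 = np.array([1.0, 0.0])     # one optimal vertex
x2 = np.array([0.0, 1.0])     # another optimal vertex

for lam in np.linspace(0.0, 1.0, 11):
    x = lam * x1 + (1 - lam) * x2                          # point on the segment
    assert np.isclose(c @ x, lam * (c @ x1) + (1 - lam) * (c @ x2))   # linearity
    assert x.sum() <= 1 + 1e-12 and (x >= -1e-12).all()    # still feasible
    print(f"lambda = {lam:.1f}   x = {x}   f(x) = {c @ x:.3f}")
```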
Edit: Answer for b). My previous argument doesn't consider the case of a degenerate optimal solution. In that case, you could have two optimal bases. This would answer no to the first question. Take for example the minimization problem
\begin{alignat*}{2}
\min\, & x_1+x_2 \\
\text{s. t. } & x_1+x_2 & \ge 0 \\
& x_1+x_2 & \le 1 \\
& x_1,x_2\ge 0.
\end{alignat*}
It is clearly degenerate at $(x_1,x_2)=(0,0)$, where the first constraint is redundant. If you put it in standard form you get
\begin{alignat*}{2}
\min \, & x_1+x_2 \\
\text{s. t. } & -x_1-x_2+s_1 & = 0 \\
& x_1+x_2 +s_2& = 1 \\
& x_1,x_2,s_1,s_2\ge 0.
\end{alignat*}
The optimal solution is $(x_1,x_2,s_1,s_2)=(0,0,0,1)$. You then have the optimal basis $\{s_1,s_2\}$, with strictly positive reduced costs $(1,1)$ for $x_1$ and $x_2$, respectively. However, the basis choice of $\{x_2,s_2\}$ gives the reduced costs of $(0,1)$ for $x_1$ and $s_1$, respectively, which means that this basis is optimal too.
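Just to double-check the two bases numerically, here is a small numpy sketch (my own verification, not part of the argument) that recomputes the basic solutions and reduced costs from the standard form above; the variable ordering $(x_1,x_2,s_1,s_2)$ and the `report` helper are only conveniences for the check.

```python
# Numerical check of the two bases discussed above.  For a basis with matrix B,
# the basic solution is B^{-1} b and the reduced cost of a nonbasic column a_j
# is c_j - y^T a_j, where y solves B^T y = c_B (the simplex multipliers).
import numpy as np

A = np.array([[-1., -1., 1., 0.],   # -x1 - x2 + s1 = 0
              [ 1.,  1., 0., 1.]])  #  x1 + x2 + s2 = 1
b = np.array([0., 1.])
c = np.array([1., 1., 0., 0.])      # minimize x1 + x2
names = ["x1", "x2", "s1", "s2"]

def report(basis):
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)                  # values of the basic variables
    y = np.linalg.solve(B.T, c[basis])           # simplex multipliers
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    reduced = {names[j]: float(c[j] - y @ A[:, j]) for j in nonbasic}
    print([names[j] for j in basis], "x_B =", x_B, "reduced costs:", reduced)

report([2, 3])   # basis {s1, s2}: reduced costs 1 and 1 for x1 and x2
report([1, 3])   # basis {x2, s2}: reduced costs 0 and 1 for x1 and s1
```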
It's possible to have a degenerate optimal solution, represented in a non-optimal basis, where all reduced costs are nonzero (some positive, some negative) but the optimal solution is not unique.
The terrible circumstances that make this happen are that in a linear program with some degenerate corner points, we might be at an optimal solution and not know it. For example, suppose that we have the problem
$$\begin{array}{rl}
\text{maximize} & x+y \\
\text{s.t.} & x+y \le 1 \\
& y \le 1 \\
& x,y \ge 0
\end{array}$$
Let's add slack variables as usual, writing the constraints as $x+y+w_1 = 1$ and $y + w_2 = 1$.
There are three choices of basic variables that describe an optimal solution:
- Make $x$ and $w_2$ basic. This is the point $(x, y, w_1, w_2) = (1,0,0,1)$ with objective value $1$. In terms of the nonbasic variables, the objective function $x+y$ can be written as $1 - 0y - w_1$. This is business as usual: the reduced costs tell us that the solution is optimal, but the reduced cost of $0$ tells us that the optimal solution might not be unique.
- Make $y$ and $w_2$ basic. This is the point $(x, y, w_1, w_2) = (0, 1, 0, 0)$ with objective value $1$. In terms of the nonbasic variables, the objective function $x+y$ can be written as $1 - 0x - w_1$. Again, the reduced costs tell us that the solution is optimal, but the reduced cost of $0$ tells us that the optimal solution might not be unique.
- Make $y$ and $w_1$ basic. This is still the point $(x, y, w_1, w_2) = (0, 1, 0, 0)$ with objective value $1$; the same as the previous optimal solution! However, the objective function is $1 + x - w_2$ in terms of the nonbasic variables: there are both positive and negative reduced costs.
Choice 3 describes an optimal solution, but the basis is not an optimal basis! If that's your situation in the simplex method, then you don't even know that the optimal solution you've reached is optimal. The optimal solution may or may not be unique.
(On the other hand, as you've already seen, if we have an optimal solution represented in an optimal basis, then the reduced costs are all nonnegative; if they are also nonzero, then they must be strictly positive, and this can be used to show that the optimal solution is unique.)
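To reproduce the three objective rows above, here is a small numpy sketch (my own check, not part of the answer): for each basis it computes the basic solution and the coefficients of the nonbasic variables in the objective row; the variable ordering $(x, y, w_1, w_2)$ and the `dictionary_row` helper are just conveniences.

```python
# Check of the three bases above.  For a basis with matrix B, the coefficient of
# a nonbasic column a_j in the objective row of the dictionary is c_j - y^T a_j,
# where y solves B^T y = c_B.
import numpy as np

A = np.array([[1., 1., 1., 0.],    # x + y + w1 = 1
              [0., 1., 0., 1.]])   # y + w2 = 1
b = np.array([1., 1.])
c = np.array([1., 1., 0., 0.])     # maximize x + y
names = ["x", "y", "w1", "w2"]

def dictionary_row(basis):
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)                  # values of the basic variables
    y = np.linalg.solve(B.T, c[basis])           # simplex multipliers
    nonbasic = [j for j in range(4) if j not in basis]
    coeffs = {names[j]: float(c[j] - y @ A[:, j]) for j in nonbasic}
    print([names[j] for j in basis], "x_B =", x_B, "objective row:", coeffs)

dictionary_row([0, 3])   # x, w2 basic:  0 for y, -1 for w1
dictionary_row([1, 3])   # y, w2 basic:  0 for x, -1 for w1
dictionary_row([1, 2])   # y, w1 basic: +1 for x, -1 for w2  (mixed signs)
```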
Best Answer
The only situation in which this would be true is when some constraint causes degeneracy, i.e., when one of the basic variables has a zero right-hand-side value, so that the corresponding pivot does not change the solution. For example:
Suppose we have a model:
$$\text{min }z=-x_1+x_2-x_3$$
Subject to,
$$x_1+x_2\le 4$$ $$-x_2+x_3\le0$$ $$x_1,x_2,x_3 \ge0$$
Converting this to standard form, we get:
$$\text{min } z +x_1 - x_2 +x_3 = 0$$
Subject to: $$x_1+x_2+s_1=4$$ $$-x_2+x_3+s_2=0$$ $$x_1, x_2, x_3, s_1, s_2 \ge0$$
From here, let's put this in a tableau:
$$\begin{array}{c|ccccc|c}
 & x_1 & x_2 & x_3 & s_1 & s_2 & \text{RHS} \\ \hline
z & 1 & -1 & 1 & 0 & 0 & 0 \\ \hline
s_1 & 1 & 1 & 0 & 1 & 0 & 4 \\
s_2 & 0 & -1 & 1 & 0 & 1 & 0
\end{array}$$
Let's pivot the $x_1$ column (the ratio test selects the first row, so $s_1$ leaves) to produce:
$$\begin{array}{c|ccccc|c}
 & x_1 & x_2 & x_3 & s_1 & s_2 & \text{RHS} \\ \hline
z & 0 & -2 & 1 & -1 & 0 & -4 \\ \hline
x_1 & 1 & 1 & 0 & 1 & 0 & 4 \\
s_2 & 0 & -1 & 1 & 0 & 1 & 0
\end{array}$$
Then let's pivot the $x_3$ column (a degenerate pivot: $s_2$ leaves at ratio $0$) to produce our final tableau:
$$\begin{array}{c|ccccc|c}
 & x_1 & x_2 & x_3 & s_1 & s_2 & \text{RHS} \\ \hline
z & 0 & -1 & 0 & -1 & -1 & -4 \\ \hline
x_1 & 1 & 1 & 0 & 1 & 0 & 4 \\
x_3 & 0 & -1 & 1 & 0 & 1 & 0
\end{array}$$
Notice that the solution produced by the second tableau, $(x_1,x_2,x_3)=(4,0,0)$, is already optimal and is exactly the same as the solution produced by the third tableau. However, the second tableau still has a $C^\pi_j > 0$ (in the $x_3$ column), which shows that the simplex method doesn't terminate right away.
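As a sanity check (my own sketch, not part of the original answer), the two pivots can be reproduced with elementary row operations in numpy; the tableau layout and the `pivot` helper below are just my choices for the verification, not a full simplex implementation.

```python
# Reproduce the pivots above by row operations on the tableau
# [z-row; constraint rows], columns ordered (x1, x2, x3, s1, s2, RHS).
import numpy as np

T = np.array([[1., -1., 1., 0., 0., 0.],   # z + x1 - x2 + x3 = 0
              [1.,  1., 0., 1., 0., 4.],   # x1 + x2 + s1 = 4
              [0., -1., 1., 0., 1., 0.]])  # -x2 + x3 + s2 = 0

def pivot(T, row, col):
    T = T.copy()
    T[row] /= T[row, col]                  # scale the pivot row
    for r in range(T.shape[0]):
        if r != row:
            T[r] -= T[r, col] * T[row]     # eliminate the column in the other rows
    return T

T2 = pivot(T, row=1, col=0)   # x1 enters, s1 leaves (second tableau)
T3 = pivot(T2, row=2, col=2)  # x3 enters, s2 leaves at ratio 0 (third tableau)
print(T2)  # z-row [0, -2, 1, -1, 0, -4]: positive entry for x3, yet (4,0,0) is optimal
print(T3)  # z-row [0, -1, 0, -1, -1, -4]: no positive entries, simplex stops
```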
Here's a PowerPoint slide I found that explains more about degeneracy in models.