I would say yes to b). To be sure:
If the objective function $f$ is linear and not identically $0$, and $x_1$ and $x_2$ are two local maxima, then, by linearity, $$f(\lambda x_1 + (1-\lambda)x_2)= \lambda f(x_1) + (1-\lambda)f(x_2).$$ This forces $f(x_1)=f(x_2)$, by the following argument: suppose they are not equal, say $f(x_1)>f(x_2)$. Then for every $\varepsilon\in(0,1)$ we have $\varepsilon f(x_1)+(1-\varepsilon)f(x_2)>f(x_2)$, so the points $\varepsilon x_1+(1-\varepsilon)x_2$, which lie arbitrarily close to $x_2$ as $\varepsilon\to 0^+$, have values under $f$ strictly greater than $f(x_2)$, contradicting the assumption that $x_2$ is a local maximum.
This means that whenever there are two local maxima in this setting, every point on the segment joining them is a local maximum too.
However, since all the nonbasic variables have strictly negative reduced costs, there are no other local maxima in a neighborhood of your feasible solution $x$. This immediately rules out other optimal solutions, because, by the argument above, any other optimum would imply the existence of local maxima in every neighborhood of $x$.
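As a quick sanity check of the segment argument (a toy instance of my own, not from the answer above): for $f(x,y)=x+y$ maximized over $x+y\le 1$, $x,y\ge 0$, the two vertex optima and every convex combination of them attain the same optimal value, exactly as linearity predicts.

```python
from fractions import Fraction as F

def f(p):
    # A linear objective: f(x, y) = x + y.
    return p[0] + p[1]

# Two maximizers of f over {x + y <= 1, x, y >= 0}:
p1, p2 = (F(1), F(0)), (F(0), F(1))

# By linearity, every convex combination attains the same value:
for k in range(5):
    lam = F(k, 4)
    combo = (lam * p1[0] + (1 - lam) * p2[0],
             lam * p1[1] + (1 - lam) * p2[1])
    assert f(combo) == lam * f(p1) + (1 - lam) * f(p2) == 1

print("all convex combinations attain the optimal value 1")
```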
Edit: answer for b). My previous argument doesn't cover the case of a degenerate optimal solution. In that case you can have two optimal bases, which answers the first question in the negative. Take for example the minimization problem
\begin{alignat*}{2}
\min\, & x_1+x_2 \\
\text{s. t. } & x_1+x_2 & \ge 0 \\
& x_1+x_2 & \le 1 \\
& x_1,x_2\ge 0.
\end{alignat*}
It is clearly degenerate at $(x_1,x_2)=(0,0)$, where the first constraint is redundant. Putting it in standard form gives
\begin{alignat*}{2}
\min \, & x_1+x_2 \\
\text{s. t. } & -x_1-x_2+s_1 & = 0 \\
& x_1+x_2 +s_2& = 1 \\
& x_1,x_2,s_1,s_2\ge 0.
\end{alignat*}
The optimal solution is $(x_1,x_2,s_1,s_2)=(0,0,0,1)$. You then have the optimal basis $\{s_1,s_2\}$, with strictly positive reduced costs $(1,1)$ for $x_1$ and $x_2$, respectively. However, the basis choice of $\{x_2,s_2\}$ gives the reduced costs of $(0,1)$ for $x_1$ and $s_1$, respectively, which means that this basis is optimal too.
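To double-check this, here is a brute-force sketch (Python, standard library only; the helper names `solve2` and `optimal_bases` are mine) that enumerates every 2-column basis of the standard-form system, keeps the feasible ones, and tests optimality via the reduced costs. It confirms that $\{s_1,s_2\}$ and $\{x_2,s_2\}$ are both optimal bases, and also turns up the symmetric choice $\{x_1,s_2\}$.

```python
from fractions import Fraction as F
from itertools import combinations

# Standard form: min x1 + x2  s.t.  -x1 - x2 + s1 = 0,  x1 + x2 + s2 = 1,  all >= 0.
A = [[F(-1), F(-1), F(1), F(0)],
     [F(1),  F(1),  F(0), F(1)]]
b = [F(0), F(1)]
c = [F(1), F(1), F(0), F(0)]
names = ["x1", "x2", "s1", "s2"]

def solve2(M, rhs):
    # Solve a 2x2 system M y = rhs by Cramer's rule; None if singular.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if det == 0:
        return None
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

optimal_bases = []
for basis in combinations(range(4), 2):
    B = [[A[i][j] for j in basis] for i in range(2)]
    xB = solve2(B, b)
    if xB is None or any(v < 0 for v in xB):
        continue  # singular or infeasible basis
    # Simplex multipliers y solve B^T y = c_B.
    BT = [[B[0][0], B[1][0]], [B[0][1], B[1][1]]]
    y = solve2(BT, [c[j] for j in basis])
    # Reduced costs c_j - y^T A_j for nonbasic j; optimal (for min) iff all >= 0.
    reduced = {names[j]: c[j] - (y[0] * A[0][j] + y[1] * A[1][j])
               for j in range(4) if j not in basis}
    if all(r >= 0 for r in reduced.values()):
        optimal_bases.append(([names[j] for j in basis], reduced))

for basis, reduced in optimal_bases:
    print(basis, {k: str(v) for k, v in reduced.items()})
```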
Yes to both questions. I first start with a simple example, then elaborate on the definition of degeneracy.
A simple degenerate LPP
To illustrate this problem, let me use this example.
$\max x$ subject to
\begin{align}
\color{blue}{x+y}&\le\color{blue}{1}\\
\color{red}x\phantom{+y}&\le\color{red}1\\
x,y&\ge0
\end{align}
Obviously, exactly one of the blue and red constraints is redundant, so this LPP has a degenerate solution.
We transform it to the standard form by adding slack variables $\color{blue}{s_1}, \color{red}{s_2}$.
$\max x$ subject to
\begin{align}
\color{blue}{x+y+s_1\phantom{+s_2}}&=\color{blue}{1}\\
\color{red}{x\phantom{+y+s_1}+s_2}&=\color{red}1\\
x,y,\color{blue}{s_1},\color{red}{s_2}&\ge0
\end{align}
Thus, each of $x=0,y=0,\color{blue}{s_1=0},\color{red}{s_2=0}$ corresponds to a line which bounds the feasible region.
- If we choose $x,\color{blue}{s_1}$ as basic variables, then the basic solution is $x_B=(x,\color{blue}{s_1})=(1,\color{blue}0)$.
- If we choose $x,\color{red}{s_2}$ as basic variables, then the basic solution is $x_B=(x,\color{red}{s_2})=(1,\color{red}0)$.
- If we choose $x,y$ as basic variables, then the basic solution is $x_B=(x,y)=(1,0)$.
Therefore, as for the first question, the condition (a zero-valued slack variable $s_i$ among the basic variables $x_B$) is irrelevant to the degeneracy of $x_B$: the definition of degeneracy applies equally to $x_B=(1,\color{blue}0)$ and $x_B=(1,\color{red}0)$.
- Geometric interpretation/intuition: Three lines (constraints) intersect at the point $(1,0)$, so one line is redundant. We use the word degenerate to capture this phenomenon.
- Using the definition of a degenerate solution: in each case there is a zero entry in the basic solution $x_B$, so it is degenerate.
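The three basis choices above can be checked mechanically. A minimal sketch (the helper `basic_solution` is my own, using exact `Fraction` arithmetic): fix the nonbasic variables at $0$ and solve the resulting $2\times 2$ system for each listed basis.

```python
from fractions import Fraction as F

# Standard form: x + y + s1 = 1,  x + s2 = 1,  all vars >= 0.
# A maps each variable to its column (row-1 coefficient, row-2 coefficient).
A = {"x": (F(1), F(1)), "y": (F(1), F(0)), "s1": (F(1), F(0)), "s2": (F(0), F(1))}
b = (F(1), F(1))

def basic_solution(v1, v2):
    # Solve the 2x2 system for basic variables (v1, v2), nonbasics fixed at 0.
    (a, c), (bb, d) = A[v1], A[v2]
    det = a * d - bb * c
    return ((b[0] * d - bb * b[1]) / det, (a * b[1] - b[0] * c) / det)

for basis in [("x", "s1"), ("x", "s2"), ("x", "y")]:
    sol = basic_solution(*basis)
    print(basis, sol, "degenerate" if 0 in sol else "nondegenerate")
```

Every listed basis yields the same point with a zero entry, matching the bullet list above.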
Theoretical stuff
Suppose $\mathrm{rank}(B)=n$, so a basic solution has $n$ basic variables. It is degenerate when at most $n-1$ of them take a nonzero value, i.e., when the basic solution $B^{-1}b$ contains a zero entry. Whether the zero-valued basic variable is a slack variable or not doesn't matter.
Now that I see the context in which the argument appears, it makes sense.
If ${\bf x}^*$ is an optimal basic feasible solution with objective value $z^*$, and ${\bf x}$ is any other feasible solution with objective value $z$, then (from derivations earlier in the text), they say
$$z^* - z = \sum_{j \in J} (z_j - c_j) x_j,$$
where $J$ is the index set of the variables that are nonbasic for the optimal solution ${\bf x}^*$ (not necessarily nonbasic for any other basic solution). For the rest of their argument (and the one below), $J$ retains this interpretation.
The other piece of information they are relying on (and this is the point I think Robert Israel is trying to make in his answer) is that the values of the basic variables for any basic solution are obtained by setting the nonbasic variables to $0$ and then solving the resulting set of linear equations. (The simplex tableau makes this automatic for you, so that you can just read the solutions off of the right-hand side.) The point is that if $x_j = 0$ for all $j \in J$ then ${\bf x}^*$ is the solution you get. Thus if we have a feasible solution ${\bf x}$, distinct from ${\bf x}^*$, then there is at least one $x_j$ for $j \in J$ that is nonzero.
Since all the variables must be nonnegative for ${\bf x}$ to be feasible, $x_j > 0$ for that index. With the assumption that $z_j - c_j < 0$ for all $j \in J$, the equation above forces $z^* < z$. Thus any other feasible solution has a strictly larger objective function value and so cannot be optimal.
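To see the identity in action, here is a toy minimization instance (the numbers are mine, not from the text): with basis $\{s_1,s_2\}$ the optimal BFS is $(0,0,4,6)$ with $z^*=0$, both nonbasic reduced costs $z_j-c_j$ are negative, and for an arbitrary other feasible solution the identity $z^* - z = \sum_{j \in J} (z_j - c_j)\, x_j$ holds exactly.

```python
from fractions import Fraction as F

# min 3*x1 + 2*x2  s.t.  x1 + x2 + s1 = 4,  x1 + 2*x2 + s2 = 6,  all vars >= 0.
# Optimal BFS: basis {s1, s2}, x* = (0, 0, 4, 6), z* = 0; J = {x1, x2}.
c = {"x1": F(3), "x2": F(2), "s1": F(0), "s2": F(0)}
A = {"x1": (F(1), F(1)), "x2": (F(1), F(2)), "s1": (F(1), F(0)), "s2": (F(0), F(1))}

# With basis {s1, s2}, B is the identity and c_B = 0, so z_j = c_B B^{-1} A_j = 0.
z_minus_c = {j: F(0) - c[j] for j in ("x1", "x2")}   # z_j - c_j, both negative

z_star = F(0)

# Any other feasible solution, e.g. x1 = 1, x2 = 2 (then s1 = 1, s2 = 1):
x = {"x1": F(1), "x2": F(2), "s1": F(1), "s2": F(1)}
z = sum(c[j] * x[j] for j in c)

lhs = z_star - z
rhs = sum(z_minus_c[j] * x[j] for j in ("x1", "x2"))
print(lhs, rhs)   # both equal -7: z* - z = sum over J of (z_j - c_j) x_j
```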