Now that I see the context in which the argument appears, it makes sense.
If ${\bf x}^*$ is an optimal basic feasible solution with objective value $z^*$, and ${\bf x}$ is any other feasible solution with objective value $z$, then (from derivations earlier in the text), they say
$$z^* - z = \sum_{j \in J} (z_j - c_j) x_j,$$
where $J$ is the index set of the variables that are nonbasic for the optimal solution ${\bf x}^*$ (and not necessarily for any other basic solution). For the rest of their argument (and the one below), $J$ retains this interpretation.
The other piece of information they are relying on (and this is the point I think Robert Israel is trying to make in his answer) is that the values of the basic variables for any basic solution are obtained by setting the nonbasic variables to $0$ and then solving the resulting set of linear equations. (The simplex tableau makes this automatic for you, so that you can just read the solutions off of the right-hand side.) The point is that if $x_j = 0$ for all $j \in J$ then ${\bf x}^*$ is the solution you get. Thus if we have a feasible solution ${\bf x}$, distinct from ${\bf x}^*$, then there is at least one $x_j$ for $j \in J$ that is nonzero.
Since all the variables have to be nonnegative in order for ${\bf x}$ to be feasible, $x_j > 0$. With the assumption that $z_j - c_j < 0$ the equation above forces $z^* < z$. Thus any other feasible solution has a strictly larger objective function value and so cannot be optimal.
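As a sanity check, the identity $z^* - z = \sum_{j \in J} (z_j - c_j)\,x_j$ can be verified numerically. The sketch below uses numpy on a small minimization LP in standard form; the data is made up for illustration, and $z_j$ is computed with the usual definition $z_j = {\bf c}_B' B^{-1} A_j$:

```python
import numpy as np

# Hypothetical LP in standard form: min c'x  s.t.  Ax = b, x >= 0,
# i.e.  min x1 + x2  s.t.  x1 + x2 + s1 = 4,  x1 + 3 x2 + s2 = 6.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0, 0.0, 0.0])

basic = [2, 3]       # optimal basis {s1, s2}
nonbasic = [0, 1]    # J = indices of x1, x2

B = A[:, basic]
x_star = np.zeros(4)
x_star[basic] = np.linalg.solve(B, b)   # basic feasible solution (0, 0, 4, 6)
z_star = c @ x_star                     # optimal value 0

# z_j - c_j for each nonbasic j; both equal -1 < 0, so x* is the unique optimum
zc = np.array([c[basic] @ np.linalg.solve(B, A[:, j]) - c[j] for j in nonbasic])

# Build another feasible x: pick nonbasic values, back out the basic ones
x = np.zeros(4)
x[nonbasic] = [1.0, 1.0]
x[basic] = np.linalg.solve(B, b - A[:, nonbasic] @ x[nonbasic])
assert (x >= 0).all()                   # x = (1, 1, 2, 2) is feasible
z = c @ x

# The identity z* - z = sum_{j in J} (z_j - c_j) x_j
lhs = z_star - z
rhs = zc @ x[nonbasic]
print(lhs, rhs)   # both -2.0
```

Since both nonbasic $z_j - c_j$ equal $-1 < 0$, the identity shows that any feasible ${\bf x} \neq {\bf x}^*$ has a strictly larger objective value, matching the uniqueness argument above.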
I would say yes to b). To be sure:
If the objective function $f$ is linear and not identically $0$, and $x_1$ and $x_2$ are two local maxima, then, by linearity, $$f(\lambda x_1 + (1-\lambda)x_2)= \lambda f(x_1) + (1-\lambda)f(x_2).$$ This forces $f(x_1)=f(x_2)$, by the following argument: suppose they are not equal, say $f(x_1)>f(x_2)$. Then $\varepsilon f(x_1)+(1-\varepsilon)f(x_2)>f(x_2)$ for every $\varepsilon \in (0,1)$, so by taking $\varepsilon$ small you can find points $\varepsilon x_1+(1-\varepsilon)x_2$ arbitrarily close to $x_2$ whose values under $f$ are greater than $f(x_2)$, contradicting the assumption that $x_2$ is a local maximum.
This means that whenever there are two local maxima in this setting, every point on the segment joining them is a local maximum too.
However, since all the nonbasic variables' reduced costs are negative, there are no other local maxima in a neighborhood of your feasible solution $x$. This immediately rules out other optimal solutions (because, by the argument above, any other optimum would imply the existence of local maxima in every neighborhood of $x$).
Edit: Answer for b). My previous argument doesn't consider the case of a degenerate optimal solution. In that case, you could have two optimal bases. This would answer no to the first question. Take for example the minimization problem
\begin{alignat*}{2}
\min\, & x_1+x_2 \\
\text{s. t. } & x_1+x_2 & \ge 0 \\
& x_1+x_2 & \le 1 \\
& x_1,x_2\ge 0.
\end{alignat*}
It is clearly degenerate at $(x_1,x_2)=(0,0)$, where the first constraint is redundant. If you put it in standard form you get
\begin{alignat*}{2}
\min \, & x_1+x_2 \\
\text{s. t. } & -x_1-x_2+s_1 & = 0 \\
& x_1+x_2 +s_2& = 1 \\
& x_1,x_2,s_1,s_2\ge 0.
\end{alignat*}
The optimal solution is $(x_1,x_2,s_1,s_2)=(0,0,0,1)$. You then have the optimal basis $\{s_1,s_2\}$, with strictly positive reduced costs $(1,1)$ for $x_1$ and $x_2$, respectively. However, the basis $\{x_2,s_2\}$ gives reduced costs $(0,1)$ for $x_1$ and $s_1$, respectively, which means that this basis is optimal too (and it corresponds to the same degenerate point).
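These reduced-cost claims are easy to check numerically. The numpy sketch below uses only the data of the example above and the standard formula $\bar c_j = c_j - c_B' B^{-1} A_j$:

```python
import numpy as np

# The degenerate LP above in standard form, columns ordered (x1, x2, s1, s2)
A = np.array([[-1.0, -1.0, 1.0, 0.0],
              [ 1.0,  1.0, 0.0, 1.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 1.0, 0.0, 0.0])

def reduced_costs(basic):
    """Reduced costs c_j - c_B' B^{-1} A_j of the nonbasic variables."""
    B = A[:, basic]
    y = np.linalg.solve(B.T, c[basic])   # simplex multipliers c_B' B^{-1}
    nonbasic = [j for j in range(A.shape[1]) if j not in basic]
    return {j: c[j] - y @ A[:, j] for j in nonbasic}

print(reduced_costs([2, 3]))   # basis {s1, s2}: x1, x2 -> {0: 1.0, 1: 1.0}
print(reduced_costs([1, 3]))   # basis {x2, s2}: x1, s1 -> {0: 0.0, 2: 1.0}
```

Both bases are optimal (all reduced costs nonnegative for this minimization problem), even though only the first has them strictly positive.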
Best Answer
Bertsimas provides some intuition as to what a "feasible direction" is. If you want to remain inside $P$ starting from $x$ in the direction $d$ (given that $x$ is feasible) then you want to make sure that you can find a positive $\theta$ such that $x + \theta d$ is still inside $P$. Graphically, this means that $d$ is a feasible direction (starting from $x$) if you can move at least a little in the direction of $d$ starting from $x$ and stay inside the feasible region $P$.
To answer your questions, you need to look carefully at what Bertsimas states on page 83. First, recall that at a basic feasible solution (with $x\geq 0$) all the nonbasic variables are zero. Bertsimas wants to move in a particular direction $d$, and he chooses $d$ so that $d_j=1$ for the single coordinate corresponding to the chosen nonbasic variable $x_j$, and $d_i=0$ for the coordinates corresponding to all the remaining nonbasic variables. You can think of this as moving along the $x_j$ axis in the positive direction, starting from a "nonbasic origin" where $x_i=0$ for every nonbasic $i$. To visualize this, imagine moving along the $x$-axis from $(0,0,0)$ in $\mathbb{R}^3$ with axes $x$, $y$ and $z$.
By moving in this particular direction, it is very easy to determine the values of the $d$ coordinates corresponding to the basic variables. It is useful to think of $x_B$ and $d_B$ as functions of the nonbasic variables. In particular, for this choice of $d$, the requirement that $Ad=0$ (so that $Ax=b$ stays satisfied) gives the closed-form expression $d_B=-B^{-1}A_j$, which he calls the $j$th basic direction.
Bertsimas defines reduced cost in Definition 3.2. Intuitively, $c_j$ is the cost per unit increase in $x_j$ and $-c_B'B^{-1}A_j$ is the cost of the change in the basic variables that arises from enforcing the condition that $Ax=b$. The reduced cost of the non-basic variable $x_j$ is simply $c_j -c_B'B^{-1}A_j$.
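A small numerical sketch may help tie these pieces together. The LP data below is made up for illustration; the code builds the $j$th basic direction $d$ (with $d_B=-B^{-1}A_j$), checks that moving along it preserves $Ax=b$, and confirms that the reduced cost $c_j - c_B'B^{-1}A_j$ is exactly the rate of cost change $c'd$ along that direction:

```python
import numpy as np

# Hypothetical standard-form LP data: Ax = b, x >= 0, four variables,
# with the first two chosen as the basic ones.
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 1.0, 1.0])

basic = [0, 1]   # B = first two columns (here the identity)
j = 2            # nonbasic variable whose basic direction we build

B = A[:, basic]
# j-th basic direction: d_j = 1, other nonbasic coordinates 0, d_B = -B^{-1} A_j
d = np.zeros(4)
d[j] = 1.0
d[basic] = -np.linalg.solve(B, A[:, j])

# Moving along d keeps Ax = b satisfied, since A d = 0
print(A @ d)                # [0. 0.]

# Reduced cost of x_j: c_j - c_B' B^{-1} A_j, which equals c'd
reduced_cost = c[j] - c[basic] @ np.linalg.solve(B, A[:, j])
print(reduced_cost, c @ d)  # the same number, computed two ways
```

The last line illustrates the intuition above: $c_j$ is the direct cost of increasing $x_j$, and $-c_B'B^{-1}A_j$ is the cost of the compensating change $d_B$ in the basic variables.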
Hopefully, this helps.