I would say yes to b). To be sure:
If the objective function $f$ is linear and not identically $0$, and if $x_1$ and $x_2$ are two local maxima, then, by linearity, $$f(\lambda x_1 + (1-\lambda)x_2)= \lambda f(x_1) + (1-\lambda)f(x_2).$$ This implies $f(x_1)=f(x_2)$, by the following argument: suppose they are not equal, say $f(x_1)>f(x_2)$. Then $\varepsilon f(x_1)+(1-\varepsilon)f(x_2)>f(x_2)$ for every $\varepsilon\in(0,1]$, so the points $\varepsilon x_1+(1-\varepsilon)x_2$ come arbitrarily close to $x_2$ as $\varepsilon\to 0$, while their values under $f$ exceed $f(x_2)$, contradicting the assumption that $x_2$ is a local maximum.
This means that whenever there are two local maxima in this setting, every point on the segment joining them is a local maximum too.
However, since all the nonbasic variables' reduced costs are negative, there are no other local maxima in any neighborhood of your feasible solution $x$. This immediately rules out other optimal solutions: by the argument above, they would imply the existence of local maxima in every neighborhood of $x$.
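The linearity identity used above can be sanity-checked numerically. The objective $f(x)=c\cdot x$ and the two points below are toy choices of mine, picked so that $f$ takes the same value at both:

```python
# Toy check of the segment argument: a linear objective that takes equal values
# at two points takes that same value on every convex combination of them.
import numpy as np

c = np.array([1., -1.])             # hypothetical linear objective f(x) = c @ x
x1 = np.array([2., 2.])
x2 = np.array([5., 5.])             # f(x1) == f(x2) == 0

for lam in np.linspace(0., 1., 11):
    x = lam * x1 + (1 - lam) * x2
    # f(lam*x1 + (1-lam)*x2) == lam*f(x1) + (1-lam)*f(x2) by linearity
    assert np.isclose(c @ x, lam * (c @ x1) + (1 - lam) * (c @ x2))
print("linearity check passed")
```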
Edit: Answer for b). My previous argument does not cover the case of a degenerate optimal solution. In that case, you can have two optimal bases, which answers the first question in the negative. Take for example the minimization problem
\begin{alignat*}{2}
\min\, & x_1+x_2 \\
\text{s. t. } & x_1+x_2 & \ge 0 \\
& x_1+x_2 & \le 1 \\
& x_1,x_2\ge 0.
\end{alignat*}
It is clearly degenerate at $(x_1,x_2)=(0,0)$, where the first constraint is redundant. Putting it in standard form gives
\begin{alignat*}{2}
\min \, & x_1+x_2 \\
\text{s. t. } & -x_1-x_2+s_1 & = 0 \\
& x_1+x_2 +s_2& = 1 \\
& x_1,x_2,s_1,s_2\ge 0.
\end{alignat*}
The optimal solution is $(x_1,x_2,s_1,s_2)=(0,0,0,1)$. You then have the optimal basis $\{s_1,s_2\}$, with strictly positive reduced costs $(1,1)$ for $x_1$ and $x_2$, respectively. However, choosing the basis $\{x_2,s_2\}$ gives reduced costs $(0,1)$ for $x_1$ and $s_1$, respectively, which means that this basis is optimal too.
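As a quick numerical check of this example, the problem can be fed to `scipy.optimize.linprog` (the solver choice is incidental; the `>=` constraint is rewritten as a `<=` one, as the function expects):

```python
# Verifying the degenerate example: minimize x1 + x2 subject to
# x1 + x2 >= 0, x1 + x2 <= 1, x1, x2 >= 0.
import numpy as np
from scipy.optimize import linprog

c = [1, 1]                      # objective coefficients
A_ub = [[-1, -1],               # x1 + x2 >= 0 rewritten as -x1 - x2 <= 0
        [1, 1]]                 # x1 + x2 <= 1
b_ub = [0, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)           # optimum at (0, 0) with objective value 0
```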
If $B \in \mathbb{R}^{m\times m}$ is a nonsingular submatrix of $A$, then the problem
$$z=\min\,\{c^{'}x : Ax=b,\ x\geq 0\}\ \ \ \ \ \ (1)$$
can be written as
\begin{align}
&z=\min c^{'}_{B}x_{B}+c^{'}_{N}x_{N}\\
&Bx_B+Nx_N=b\ \ \ \ \ \ \ \ \ \ \ (2)\\
&x_{B} \geq 0, x_{N} \geq 0
\end{align}
where $N \in \mathbb{R}^{m\times(n-m)}$ is the submatrix of $A$ whose columns are not in $B$, and the subscripts of $x_B$, $x_N$, $c_B$, $c_N$ are taken accordingly. Solving the system of equations for $x_B$, we get
\begin{align}
x_B=B^{-1}b - B^{-1}Nx_N\ \ \ \ \ (3)
\end{align}
Equation (3) shows that all solutions of the equation system depend on $n-m$ parameters (namely the $n-m$ components of $x_N$). The particular solution obtained by setting $x_N=\mathbb{0}_{n-m}$ is called a basic solution of the equation system. If $x=[x_B,x_N]^{'}=[B^{-1}b,\mathbb{0}_{n-m}]^{'}$ is such that $x_B=B^{-1}b \geq 0$, then $x$ is called a basic feasible solution of the LP problem (1) or (2); the $x_B$ are the basic variables and the $x_N$ are the nonbasic variables. Note that the feasibility condition is necessary for a basic solution to be optimal.
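As a concrete illustration, here is the computation of $x_B=B^{-1}b$ for the standard-form example above (the column order $x_1,x_2,s_1,s_2$ is my own indexing assumption):

```python
# Minimal sketch: computing the basic solution x_B = B^{-1} b for a chosen basis.
import numpy as np

A = np.array([[-1., -1., 1., 0.],   # standard-form constraints from the example
              [ 1.,  1., 0., 1.]])
b = np.array([0., 1.])

basic = [2, 3]                      # basis {s1, s2} (0-based column indices)
B = A[:, basic]
x_B = np.linalg.solve(B, b)         # basic variables; nonbasic ones are set to 0
print(x_B)                          # [0. 1.] -> feasible since x_B >= 0
```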
As for the optimality condition, there are two ways to derive it.
Substituting expression (3) for $x_B$ into problem (2), we get
\begin{align}
&z=\min c^{'}_{B}B^{-1}b+\mathbb{0}_m^{'}x_B+(c^{'}_{N}-c_B^{'}B^{-1}N)x_{N}\\
&x_B=B^{-1}b-B^{-1}Nx_N \geq 0\\
&x_{N} \geq 0
\end{align}
The above problem is called the reduced problem; the vector
\begin{align}
\hat{c}^{'}=[\hat{c}_B^{'}, \hat{c}_N^{'}]=[\mathbb{0}^{'}_{m}, c_N^{'}-c_B^{'}B^{-1}N]=c^{'}-c_B^{'}B^{-1}[B\: |\: N]= c^{'}-c_B^{'}B^{-1}A
\end{align}
is the vector of reduced costs, and the constant term $c_B^{'}B^{-1}b$ in the objective function is the value the objective takes at the current basic feasible solution $[x_B, \mathbb{0}_{n-m}]^{'}$. Now assume we want to perform a pivot operation, that is, we want a nonbasic variable, say $x_{k}$, to enter the basis and one of the basic variables, say $x_h$, to leave it. The pivot operation is performed by setting $x_h=0$ and letting $x_{k}$ increase by the maximum allowed amount, say $\delta \geq 0$. Then the variation in the objective function is
\begin{align}
\Delta z = \hat{c}_k \delta
\end{align}
It follows that $\Delta z \geq 0$ if $\hat{c}_k \geq 0$ for all the nonbasic variables, which is a sufficient condition for the current basis $B$ to be optimal. Note that this condition is also necessary if $[x_B, \mathbb{0}_{n-m}]^{'}$ is nondegenerate.
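Using the numbers from the degenerate example above (again with my column order $x_1,x_2,s_1,s_2$), the reduced-cost formula $\hat{c}^{'}=c^{'}-c_B^{'}B^{-1}A$ can be checked for both optimal bases:

```python
# Minimal sketch: reduced costs c_hat' = c' - c_B' B^{-1} A for two bases.
import numpy as np

A = np.array([[-1., -1., 1., 0.],   # standard-form constraint matrix from the example
              [ 1.,  1., 0., 1.]])
c = np.array([1., 1., 0., 0.])      # costs of x1, x2, s1, s2

for basic in ([2, 3], [1, 3]):      # bases {s1, s2} and {x2, s2}
    B = A[:, basic]
    y = np.linalg.solve(B.T, c[basic])   # y' = c_B' B^{-1}
    c_hat = c - y @ A                    # reduced-cost vector; zero on basic columns
    print(basic, c_hat)
```

The first basis yields reduced costs $(1,1)$ on $x_1,x_2$; the second yields $(0,1)$ on $x_1,s_1$, matching the values quoted in the example.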
A more elegant way to derive the optimality condition relies on duality theory. Let P be problem (1) and D its dual
\begin{align}
w=\max y^{'}b\\
y^{'}A \leq c^{'}
\end{align}
The Weak Duality Theorem states that if $x$ and $y$ are feasible solutions for $P$ and $D$, respectively, then
\begin{align}
y^{'}b \leq c^{'}x
\end{align}
It follows that if $x$ and $y$ are feasible solutions for $P$ and $D$, respectively, and $w=y^{'}b = c^{'}x=z$, then $x$ and $y$ are also optimal.
Now let $x=[x_B, \mathbb{0}_{n-m}]^{'}$ be a basic feasible solution for P. Set
\begin{align}
y^{'}=c_B^{'}B^{-1}
\end{align}
We get
\begin{align}
w=y^{'}b=c_B^{'}B^{-1}b=c_B^{'}x_B=c^{'}x=z
\end{align}
Therefore $x$ and $y$ are two vectors at which the primal and dual objective functions assume the same value. Hence the primal basic feasible solution $x=[x_B, \mathbb{0}_{n-m}]^{'}$ is optimal if the dual solution $y^{'}=c_B^{'}B^{-1}$ is feasible, that is,
\begin{align}
c^{'} \geq c_{B}^{'}B^{-1}A
\end{align}
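This dual-feasibility certificate can also be verified numerically on the example's data, with the basis $\{s_1,s_2\}$ (column order $x_1,x_2,s_1,s_2$ is my indexing assumption):

```python
# Sketch: optimality via dual feasibility of y' = c_B' B^{-1}.
import numpy as np

A = np.array([[-1., -1., 1., 0.],
              [ 1.,  1., 0., 1.]])
b = np.array([0., 1.])
c = np.array([1., 1., 0., 0.])

basic = [2, 3]                       # basis {s1, s2}
B = A[:, basic]
y = np.linalg.solve(B.T, c[basic])   # dual candidate y' = c_B' B^{-1}

x = np.zeros(4)
x[basic] = np.linalg.solve(B, b)     # primal basic feasible solution

print(np.all(y @ A <= c + 1e-12))    # dual feasibility: c' >= c_B' B^{-1} A
print(y @ b, c @ x)                  # equal objective values -> both optimal
```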
From $y=A_B^{-T}c_B$, multiply by $A_B^T$ to get $A_B^Ty = c_B$.
The tableau is optimal, so $c_B^T A_B^{-1} A_N - c_N^T \geq 0$, so $y^T A_N \geq c_N^T$.
Combining the two, you get $A^Ty \geq c$.