This is not a full answer, but it is relevant information. Page 30 of *Convex Analysis and Nonlinear Optimization* by Borwein and Lewis states:
...we need to impose a regularity condition or "constraint
qualification", an approach which is another recurring theme. The
easiest such condition in this context is simply the linear
independence of the gradients of the active constraints $\{ \nabla
g_i(\bar x) \mid i \in I(\bar x) \}$. The culminating result of this
section uses the following weaker condition.
Assumption 2.3.7 (The Mangasarian-Fromovitz constraint qualification) There is a direction $d$ in $E$ satisfying $\langle
\nabla g_i(\bar x), d \rangle < 0$ for all indices $i$ in the active
set $I(\bar x)$.
Later, on p. 45, the book states that if the functions $f$ and $g_1,\ldots,g_m$ are differentiable at $\bar x$ then:
in this case it is easy to see that the Slater condition is equivalent
to the Mangasarian-Fromovitz constraint qualification (Assumption
2.3.7).
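One direction of this equivalence can be sketched directly (assuming each $g_i$ is convex and differentiable, as in the Slater setting): a Slater point $x^0$ with $g_i(x^0) < 0$ for all $i$ provides the MFCQ direction $d = x^0 - \bar x$, since the gradient inequality for convex functions gives
\begin{equation*}
\langle \nabla g_i(\bar x), x^0 - \bar x \rangle \leq g_i(x^0) - g_i(\bar x) = g_i(x^0) < 0
\end{equation*}
for every active index $i \in I(\bar x)$, where $g_i(\bar x) = 0$.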
Given the primal:
\begin{equation*}
\begin{array}{lll}
\textrm{maximize } & \sum\limits_{j=1}^n c_j x_j&\\
\textrm{subject to} & \sum\limits_{j=1}^n a_{ij} x_j \leq b_i & \textrm{ for } i=1,2,\ldots,m\\
& x_j \geq 0 & \textrm{ for } j=1,2,\ldots,n
\end{array}
\end{equation*}
The dual is:
\begin{equation*}
\begin{array}{lll}
\textrm{minimize } & \sum\limits_{i=1}^m b_i y_i&\\
\textrm{subject to} & \sum\limits_{i=1}^m a_{ij} y_i \geq c_j & \textrm{ for } j=1,2,\ldots,n\\
& y_i \geq 0 & \textrm{ for } i=1,2,\ldots,m
\end{array}
\end{equation*}
Now given a solution $x^*$, use the complementary slackness theorem!
Either the primal variable is zero, or the associated dual constraint is tight:
\begin{equation}
{x^*_j = 0 \textrm{ or } \sum_{i=1}^m a_{ij}y^*_i = c_j \textrm{ (or both) for } j=1,2,\ldots,n}
\end{equation}
Either the primal constraint is tight, or the associated dual variable is zero:
\begin{equation} \label{eq:CSC:2}
{\sum_{j=1}^n a_{ij}x^*_j = b_i \textrm{ or } y^*_i = 0 \textrm{ (or both) for } i=1,2,\ldots,m}
\end{equation}
If your solution $x^*$ is basic and nondegenerate, these conditions give you a system of $m$ equations in the $m$ unknowns $y_1,\ldots,y_m$. Solving it yields a candidate dual solution $y^*$. If $y^*$ is feasible for the dual, then $x^*$ is optimal.
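Here is a small worked instance of this recipe. The data below is my own illustration (not from the question): primal $\max 3x_1 + 2x_2$ subject to $x_1 + x_2 \leq 4$, $x_1 + 3x_2 \leq 6$, $x \geq 0$, with given optimal solution $x^* = (4, 0)$.

```python
c = [3, 2]               # objective coefficients c_j
A = [[1, 1], [1, 3]]     # constraint matrix a_ij (row i, column j)
b = [4, 6]               # right-hand sides b_i

# Suppose we are handed the basic optimal solution x* = (4, 0).
x_star = [4, 0]

# Condition (2): primal constraint 2 is slack (4 + 0 < 6), so y2* = 0.
# Condition (1): x1* > 0, so dual constraint 1 is tight:
#   a_11 y1 + a_21 y2 = c_1, i.e. y1 + y2 = 3.  With y2 = 0, y1 = 3.
y_star = [3, 0]

# Verify dual feasibility: sum_i a_ij y_i >= c_j for every j.
dual_feasible = all(
    sum(A[i][j] * y_star[i] for i in range(2)) >= c[j] for j in range(2)
)

# The objectives agree, certifying optimality of x*.
primal_obj = sum(cj * xj for cj, xj in zip(c, x_star))
dual_obj = sum(bi * yi for bi, yi in zip(b, y_star))
print(dual_feasible, primal_obj, dual_obj)  # True 12 12
```

Equal objective values are exactly the certificate complementary slackness promises: $y^*$ is dual feasible and matches the primal value, so $x^*$ is optimal.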
Best Answer
Yes, the issue of nonnegative reduced costs is likely the reason.
It does not take too much work to show (without the simplex method) that if a linear program has an optimal solution, then it has an optimal basic feasible solution. This is still a nontrivial proof, and using the simplex method to prove strong duality sidesteps it.
However, once we have the optimal basic feasible solution, there are still two challenges involving the reduced costs.
This is not to say that strong duality cannot be proven without the simplex method. The usual alternate route is to use Farkas's lemma: if a system of linear inequalities has no solution, then some linear combination of those inequalities (with nonnegative coefficients, so that the inequalities are preserved) yields a contradiction.
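For concreteness, one standard matrix form of the lemma (notation mine, not the answer's): for $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, exactly one of the following holds:
\begin{equation*}
\exists x:\; Ax \leq b \qquad \textrm{or} \qquad \exists y \geq 0:\; y^{\mathsf T}A = 0,\; y^{\mathsf T}b < 0.
\end{equation*}
In the second case $y$ encodes the nonnegative combination of the inequalities: multiplying $Ax \leq b$ on the left by $y^{\mathsf T}$ would give $0 = y^{\mathsf T}Ax \leq y^{\mathsf T}b < 0$, a contradiction, so no solution $x$ can exist.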
To prove Farkas's lemma, we can use Fourier-Motzkin elimination to try to solve the system of linear inequalities: this is a much slower algorithm than the simplex method, but it is also easier to handle theoretically.
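A minimal sketch of Fourier-Motzkin elimination (my own illustration, not code from any particular library): each inequality $\sum_j a_j x_j \leq b$ is stored as a pair `(a, b)`, and eliminating a variable pairs up the inequalities where it has positive coefficient with those where it has negative coefficient. Once every variable is eliminated, each surviving row reads $0 \leq b$, so feasibility is just a sign check.

```python
from itertools import product

def fm_eliminate(rows, k):
    """Eliminate variable k from a system of inequalities sum_j a[j]*x[j] <= b,
    where each row is a pair (a, b) with a a coefficient list."""
    pos = [r for r in rows if r[0][k] > 0]
    neg = [r for r in rows if r[0][k] < 0]
    zero = [r for r in rows if r[0][k] == 0]
    new_rows = list(zero)
    for (ap, bp), (an, bn) in product(pos, neg):
        # Scale the two rows so the coefficients of x_k cancel.
        sp, sn = -an[k], ap[k]  # both strictly positive
        a = [sp * u + sn * v for u, v in zip(ap, an)]
        b = sp * bp + sn * bn
        new_rows.append((a, b))
    return new_rows

def feasible(rows, nvars):
    """Decide feasibility of the system by eliminating every variable."""
    for k in range(nvars):
        rows = fm_eliminate(rows, k)
    # All variables gone: each remaining row asserts 0 <= b.
    return all(b >= 0 for a, b in rows)
```

For example, the system $x \leq 1$, $-x \leq -2$ (i.e. $x \geq 2$) eliminates to $0 \leq -1$ and is reported infeasible; the scaling coefficients accumulated along the way are exactly the nonnegative multipliers that Farkas's lemma promises. The pairing step is also why the method is slow: each elimination can square the number of inequalities.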