Three conditions must be satisfied for a solution to a linear programming problem to be optimal:
1. The primal solution must be feasible.
2. The dual solution must be feasible.
3. Complementary slackness must be satisfied. (Remember that primal variables are paired with dual slack variables, and dual variables are paired with primal slack variables. Complementary slackness is the requirement that, for each of these pairs, at least one variable must be zero.)
The primal simplex method (after Phase I) keeps (1) and (3) always true and searches for a solution that satisfies (2). The dual simplex method (again, after Phase I) keeps (2) and (3) always true and searches for a solution that satisfies (1).
The approach you are describing (minus the $b^T y \geq c^T x$ constraint) is indeed used. It's the third option, in which (1) and (2) are always kept true while the algorithm searches for a solution that satisfies (3). As Yuval Filmus indicates, this is called a primal-dual method or the parametric self-dual simplex method. See, for example, Rader's Deterministic Operations Research, pp. 432-440, or Vanderbei's Linear Programming: Foundations and Extensions, pp. 119-121. (See also Vanderbei's text for how to find an initial feasible solution to both problems, i.e., Phase I.) The idea dates back at least to George Dantzig, the inventor of the simplex method.
As a side comment, Vanderbei indicates that the parametric self-dual simplex method is more amenable to probabilistic analysis than the other versions of the simplex method.
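As a concrete illustration of conditions (1)-(3), here is a minimal Python sketch with a made-up two-variable LP; the candidate primal/dual solutions are chosen so that all three conditions hold. Nothing here comes from a specific textbook example.

```python
import numpy as np

# Made-up LP for illustration:
#   maximize  c^T x   subject to  A x <= b,  x >= 0,
# whose dual is
#   minimize  b^T y   subject to  A^T y >= c,  y >= 0.
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

x = np.array([4.0, 0.0])  # candidate primal solution
y = np.array([3.0, 0.0])  # candidate dual solution
tol = 1e-9

primal_slack = b - A @ x  # paired with the dual variables y
dual_slack = A.T @ y - c  # paired with the primal variables x

feasible_primal = bool(np.all(x >= -tol) and np.all(primal_slack >= -tol))  # (1)
feasible_dual = bool(np.all(y >= -tol) and np.all(dual_slack >= -tol))      # (2)
# (3): in each primal/dual pair, at least one member is zero.
comp_slack = bool(np.all(np.abs(x * dual_slack) <= tol) and
                  np.all(np.abs(y * primal_slack) <= tol))

print(feasible_primal, feasible_dual, comp_slack)  # True True True -> optimal
```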
In extended form, the inequality constraints in the first case can be written as
$$a_{1,1}x_1+a_{1,2}x_2+\cdots+a_{1,n}x_n\le b_1$$
$$a_{2,1}x_1+a_{2,2}x_2+\cdots+a_{2,n}x_n\le b_2$$
$$\vdots$$
$$a_{m,1}x_1+a_{m,2}x_2+\cdots+a_{m,n}x_n\le b_m$$
Let's introduce slack variables $x_{n+i}\ge 0$, for $i=1,\dots,m$, into the inequality constraints so that
$$a_{1,1}x_1+a_{1,2}x_2+\cdots+a_{1,n}x_n+x_{n+1}= b_1$$
$$a_{2,1}x_1+a_{2,2}x_2+\cdots+a_{2,n}x_n+x_{n+2}= b_2$$
$$\vdots$$
$$a_{m,1}x_1+a_{m,2}x_2+\cdots+a_{m,n}x_n+x_{n+m}= b_m$$
In short form,
$$\sum_{j=1}^{n} a_{i,j}x_j+x_{n+i}=b_i\quad\text{for } i=1,\dots,m$$
Thus the first case is transformed into the second one.
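To make the construction concrete, here is a small Python sketch; the matrix $A$, the vector $b$, and the point $x$ are all made up for illustration. Appending an identity block to $A$ turns $Ax\le b$ into the equality system above.

```python
import numpy as np

# Hypothetical data: m = 2 inequality constraints on n = 3 variables.
A = np.array([[1.0, 2.0, 1.0],
              [3.0, 0.0, 2.0]])
b = np.array([10.0, 12.0])
m, n = A.shape

# Appending an m x m identity block gives one slack variable x_{n+i} per row,
# turning  A x <= b  into  [A | I] (x, s) = b  with x, s >= 0.
A_eq = np.hstack([A, np.eye(m)])

x = np.array([1.0, 2.0, 1.0])  # a feasible point of the inequality system
s = b - A @ x                  # slack values x_{n+1}, ..., x_{n+m}
assert np.all(s >= 0)          # feasibility of x makes the slacks nonnegative
assert np.allclose(A_eq @ np.concatenate([x, s]), b)
```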
------EDIT------
We can rewrite the equality constraints in B as
$$\sum_{j=1}^{n} a_{i,j}x_j=b_i-x_{n+i}\quad\text{for } i=1,\dots,m$$
Since the question states that $x_j\ge 0$ for $j=1,\dots,n+m$, we can eliminate $x_j$ for $j>n$ without loss of generality and recover the inequality constraints
$$\sum_{j=1}^{n} a_{i,j}x_j\le b_i\quad\text{for } i=1,\dots,m$$
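For instance, with a single made-up constraint: $x_1+2x_2+x_3=5$ together with $x_3\ge 0$ says exactly that $x_1+2x_2\le 5$, so dropping the slack variable $x_3$ recovers the original inequality.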
Best Answer
When the rhs is changed, there are two cases:
1. All constraints whose rhs values are being changed are nonbinding. The current basis remains optimal if each rhs stays within its allowable range, in which case the values of the decision variables and $z$ remain unchanged.
2. At least one constraint whose rhs is being changed is binding. In this case the 100% rule is helpful.
- $b_j$ = current rhs of the $j$th constraint
- $\Delta b_j$ = change in $b_j$
- $I_j$ = allowable increase for constraint $j$
- $D_j$ = allowable decrease for constraint $j$
- $r_j=\frac{\Delta b_j}{I_j}$ when $\Delta b_j\ge 0$
- $r_j=\frac{-\Delta b_j}{D_j}$ when $\Delta b_j\le 0$
The 100% rule guarantees that the current basis remains optimal if $\sum_j r_j\le 1$; if the sum exceeds $1$, the basis may or may not remain optimal.
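A minimal Python sketch of applying the rule; the changes $\Delta b_j$ and the allowable ranges $I_j$, $D_j$ below are hypothetical numbers, as if read from a solver's sensitivity report.

```python
# Hypothetical data, as if read from a solver's sensitivity report.
delta_b = [5.0, -2.0, 0.0]   # proposed rhs changes for three constraints
I = [10.0, 8.0, 6.0]         # allowable increases I_j
D = [4.0, 5.0, 3.0]          # allowable decreases D_j

# r_j = Delta b_j / I_j if Delta b_j >= 0, else -Delta b_j / D_j
r = [db / inc if db >= 0 else -db / dec
     for db, inc, dec in zip(delta_b, I, D)]

total = sum(r)               # 5/10 + 2/5 + 0 = 0.9
if total <= 1:
    print("100% rule: current basis guaranteed to remain optimal")
else:
    print("100% rule inconclusive: basis may or may not remain optimal")
```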