[Math] Proof of 100% rule in Linear Programming

linear-programming, simplex

I am a beginner student in linear programming. I was introduced to the 100% rule, which lets us determine whether the optimal basis will change when there is a simultaneous increase/decrease in the RHS of the constraints or in the coefficients of the objective function.

Can someone give a hint on how to prove the 100% rule?

Edit: I found a PDF containing the formal proof on p. 19. From reading it, what I understand is that the weighted combination of the variations yields a feasible solution. Why it should also be optimal, I cannot figure out.

Best Answer

When the RHS is changed, there are two cases:

1. All constraints whose RHS values are being changed are nonbinding. Then the current basis remains optimal if each RHS stays within its allowable range, and the values of the decision variables and $z$ remain unchanged.

2. At least one constraint whose RHS is being changed is binding. This is the case where the 100% rule is helpful.

$b_j =$ current RHS of the $j$th constraint

$\Delta b_j =$ change in $b_j$

$I_j =$ allowable increase for constraint $j$

$D_j =$ allowable decrease for constraint $j$

$r_j = \dfrac{\Delta b_j}{I_j}$ when $\Delta b_j \ge 0$

$r_j = \dfrac{-\Delta b_j}{D_j}$ when $\Delta b_j \le 0$

The current basis is guaranteed to remain optimal if $\sum_j r_j \le 1$. Note that this is only a sufficient condition: if the sum exceeds $1$, the basis may or may not remain optimal, and no conclusion can be drawn. Regarding your edit: a change in the RHS does not affect the reduced costs, so the current basis stays dual-feasible; its optimality therefore reduces to primal feasibility, i.e. $x_B = B^{-1}b \ge 0$, which is why showing that the weighted combination of changes is feasible is enough to conclude optimality.
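
As a concrete illustration with made-up sensitivity data, suppose two binding constraints have allowable increases $I_1 = 10$, $I_2 = 8$ and allowable decreases $D_1 = 6$, $D_2 = 6$, and we propose the simultaneous changes $\Delta b_1 = 5$ and $\Delta b_2 = -3$. Then

$$r_1 = \frac{\Delta b_1}{I_1} = \frac{5}{10} = 0.5, \qquad r_2 = \frac{-\Delta b_2}{D_2} = \frac{3}{6} = 0.5, \qquad r_1 + r_2 = 1 \le 1,$$

so the rule guarantees that the current basis remains optimal under this simultaneous change.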