[Math] Gradient-based interpretation of the simplex algorithm

linear programming, numerical optimization, optimization

The simplex algorithm iterates from vertex to vertex of the convex polytope that forms the feasible region of the constrained optimization problem, with each iteration moving to an adjacent vertex that decreases the objective function.

I am trying to find a "gradient-based" interpretation of the simplex algorithm, in which, given some vertex of the convex polytope defined by the constraints of the optimization problem, the gradient "points" at the adjacent vertex that most decreases the objective function.

Does such a gradient method exist? How would I go about deriving it?
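
For concreteness, take a linear program in standard inequality form,

$$\min_x \; c^\top x \quad \text{subject to} \quad Ax \le b,\; x \ge 0,$$

where the gradient of the objective is just the constant vector $c$; so the question is really how a simplex step relates this fixed direction to the edges leaving the current vertex.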

Best Answer

The simplex method is already a "gradient descent" method, in the sense that it follows some descent direction. I wouldn't call it gradient descent, though, since the gradient does not exist at the basic solutions (the vertices); it is better described as a local search, or hill descent.

Seen as a local search, the neighbors of the current basic solution are all basic solutions that can be obtained by a single pivot. A pivoting rule chooses one of the improving neighbors. Since linear programs are convex, a local minimum is a global minimum, so any pivoting rule that chooses some improving pivot will reach an optimal solution (setting aside cycling on degenerate problems, which anti-cycling rules such as Bland's rule prevent).
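
As a rough illustration of this local-search view, here is a minimal NumPy sketch, assuming a standard-form LP $\min c^\top x$ s.t. $Ax = b$, $x \ge 0$ and a known feasible starting basis. The function name is made up, there are no anti-cycling safeguards, and the dense linear algebra is nothing like a production solver:

```python
import numpy as np

def simplex_local_search(A, b, c, basis):
    """Naive simplex: repeatedly move to an improving adjacent
    basic feasible solution until no improving pivot exists."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)            # current vertex (basic variables)
        y = np.linalg.solve(B.T, c[basis])     # simplex multipliers
        reduced = c - A.T @ y                  # reduced costs (zero on the basis)
        # improving neighbors = nonbasic variables with negative reduced cost
        candidates = [j for j in range(n)
                      if j not in basis and reduced[j] < -1e-9]
        if not candidates:                     # local, hence global, minimum
            return basis, x_B
        j = candidates[0]                      # any improving pivot will do
        d = np.linalg.solve(B, A[:, j])        # edge direction in basic coordinates
        ratios = [x_B[i] / d[i] if d[i] > 1e-9 else np.inf for i in range(m)]
        i = int(np.argmin(ratios))             # ratio test picks the leaving variable
        if ratios[i] == np.inf:
            raise ValueError("problem is unbounded")
        basis[i] = j                           # pivot: step to the adjacent vertex
```

The line `j = candidates[0]` is exactly where a pivoting rule plugs in.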

The pivoting rule that follows the edge most closely aligned with the (negative) gradient is called steepest edge; see, e.g., Forrest and Goldfarb, "Steepest-edge simplex algorithms for linear programming" (Mathematical Programming 57, 1992).
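
Continuing the sketch above (the names and interface are mine, not from the paper), steepest-edge pricing scores each candidate by the rate of objective decrease per unit of distance travelled along the corresponding edge, i.e. the reduced cost divided by the length of the edge direction. Real implementations update these norms incrementally rather than recomputing them:

```python
def steepest_edge_pivot(A, B, reduced, candidates):
    """Pick the entering variable whose edge is most aligned with the
    (constant) objective gradient: maximize |reduced cost| / edge length."""
    best_j, best_score = None, 0.0
    for j in candidates:
        d = np.linalg.solve(B, A[:, j])    # basic part of the edge direction
        edge_norm = np.sqrt(1.0 + d @ d)   # full edge is (+1 on x_j, -d on the basis)
        score = -reduced[j] / edge_norm    # improvement per unit step along the edge
        if score > best_score:
            best_j, best_score = j, score
    return best_j
```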

As far as I know, the best-neighbor pivoting rule (greatest improvement), i.e. choosing the entering variable that leads to the largest improvement in the objective, is not used in practice because finding that variable is too costly.
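
For comparison, a best-neighbor rule might look like the sketch below (again reusing the names from the code above). The expense is visible directly: it needs a ratio test, and hence a solve with $B$, for every candidate column, rather than for the single entering variable:

```python
def greatest_improvement_pivot(A, B, x_B, reduced, candidates):
    """Pick the entering variable giving the largest objective decrease:
    improvement = |reduced cost| * step length from the ratio test."""
    best_j, best_gain = None, 0.0
    for j in candidates:
        d = np.linalg.solve(B, A[:, j])    # one m-by-m solve per candidate
        steps = [x_B[i] / d[i] for i in range(len(d)) if d[i] > 1e-9]
        if not steps:
            continue                       # unbounded edge; ignored in this sketch
        gain = -reduced[j] * min(steps)
        if gain > best_gain:
            best_j, best_gain = j, gain
    return best_j
```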