Linear programming: the set of gradients of the objective function for which a given extreme point of the feasible region is optimal

convex-analysis, convex-hulls, linear-algebra, linear-programming

Consider the linear program:

$$
\begin{array}{ll}
\text { Maximize } & \mathbf{c}^{\mathrm{T}} \mathbf{x} \\
\text { subject to } & A \mathbf{x} \leq \mathbf{b} \\
\text { and } & \mathbf{x} \geq \mathbf{0}
\end{array}
$$

As illustrated by a simple 2-dimensional picture, the set of gradients $\mathbf{c}$ for which a given extreme point $X$ of the feasible region is an optimal solution seems to be everything "in between" the slopes of the two boundary lines that intersect at $X$ (i.e., the two constraints which hold with equality at $X$).

Extending the same logic, my understanding is that for a general $n$-dimensional linear program, the set of gradients $\mathbf{c}$ of the linear objective for which a given extreme point $X$ of the feasible region is an optimal solution is the convex hull of the gradients (the corresponding rows of $A$) of all the constraints which hold with equality at $X$. Is this correct? Even if it is, it seems we would need some kind of normalization of the gradients (say, normalizing the coefficient of one of the variables to $1$) before we can take convex combinations. If we normalize the coefficient of one variable to $1$, what do we do with constraints which do not contain that variable?

Best Answer

If $X$ is an extreme point of the feasible region, a necessary and sufficient condition for $X$ to be an optimal solution is that there exists no vector $\mathbf{v}$ with $\mathbf{a}^{\mathrm T} \mathbf{v} \le 0$ for every gradient $\mathbf{a}$ of a constraint which holds with equality at $X$ (including any active nonnegativity constraints, written as $-x_i \le 0$) and $\mathbf{c}^{\mathrm T} \mathbf{v} > 0$. Such a $\mathbf{v}$ would be a feasible direction along which the objective strictly increases.
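As a concrete $2$-dimensional instance of this condition (the numbers here are made up purely for illustration): take the constraints $x_1 + x_2 \le 4$ and $x_1 \le 3$ together with $\mathbf{x} \ge \mathbf{0}$, and the extreme point $X = (3,1)$, where the binding gradients are $\mathbf{a}_1 = (1,1)^{\mathrm T}$ and $\mathbf{a}_2 = (1,0)^{\mathrm T}$. For $\mathbf{c} = (0,1)^{\mathrm T}$, the vector $\mathbf{v} = (-1,1)^{\mathrm T}$ satisfies

$$
\mathbf{a}_1^{\mathrm T} \mathbf{v} = 0 \le 0, \qquad \mathbf{a}_2^{\mathrm T} \mathbf{v} = -1 \le 0, \qquad \mathbf{c}^{\mathrm T} \mathbf{v} = 1 > 0,
$$

so $X$ is not optimal (the maximum is attained at $(0,4)$). For $\mathbf{c} = (2,1)^{\mathrm T} = 1 \cdot \mathbf{a}_1 + 1 \cdot \mathbf{a}_2$ no such $\mathbf{v}$ exists, and $X$ is indeed optimal.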
Clearly there can't be such a $\mathbf{v}$ if $\mathbf{c}$ is a linear combination with nonnegative coefficients of those gradients $\mathbf{a}$, i.e. a conical combination of those gradients: $\mathbf{a}^{\mathrm T} \mathbf{v} \le 0$ for all binding $\mathbf{a}$ then forces $\mathbf{c}^{\mathrm T} \mathbf{v} \le 0$. Conversely, if $\mathbf{c}$ is not a conical combination of those gradients, such a $\mathbf{v}$ exists by the Separation Theorem, separating $\mathbf{c}$ from the closed convex cone generated by the binding gradients. So the correct set is the convex cone generated by the binding gradients (the normal cone at $X$), not their convex hull; in particular, no normalization is needed, and constraints which do not contain a given variable cause no difficulty, since conical combinations, unlike convex combinations, are scale-free.
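Here is a minimal numerical sketch of this characterization in Python, reusing the made-up two-constraint instance above. Testing whether $\mathbf{c}$ lies in the cone of binding gradients is itself a small feasibility LP, and the answer can be cross-checked by solving the original LP directly with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up instance: maximize c.x subject to x1 + x2 <= 4, x1 <= 3, x >= 0.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 3.0])
X = np.array([3.0, 1.0])  # extreme point under test

# Gradients (rows of A) of the constraints binding at X.  At this X no
# nonnegativity constraint is active, so the rows of A suffice.
binding = A[np.isclose(A @ X, b)]

def in_cone(c, gens):
    # Is c a conical combination of the rows of gens?  Solve the
    # feasibility LP: find lam >= 0 with gens.T @ lam == c.
    res = linprog(np.zeros(len(gens)), A_eq=gens.T, b_eq=c,
                  bounds=[(0, None)] * len(gens), method="highs")
    return res.status == 0

def optimal_at_X(c):
    # Cross-check by solving the LP directly (linprog minimizes, so negate c)
    # and comparing the optimal value with the objective value at X.
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    return bool(np.isclose(-res.fun, c @ X))

for c in (np.array([2.0, 1.0]), np.array([0.0, 1.0])):
    print(c, in_cone(c, binding), optimal_at_X(c))
# c = (2, 1): in the cone, and X is optimal.
# c = (0, 1): not in the cone, and X is not optimal (the optimum is at (0, 4)).
```

Casting cone membership as a feasibility LP is convenient here because it sidesteps any normalization of the gradients, in line with the argument above.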