[Math] Proof of Why Optimal Solutions Occur at Extreme Points

linear programming

I'm taking my first class in Linear Programming. The book I am reading from is good in that it uses a lot of examples, but bad in that it provides few proofs. I need a proof for the following theorem.

Theorem:

If the constraint set $S$ of a canonical maximization or a canonical minimization linear programming problem is bounded, then the maximum or minimum value of the objective function is attained at an extreme point of $S$.

Glossary of Terms:

Definition 1

The problem

Maximize $f(x_1,x_2,\cdots,x_n)=c_1x_1+c_2x_2+\cdots+c_nx_n$

Subject to $a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n\leq b_1$

$a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n \leq b_2$

$\vdots$

$a_{m1}x_1+a_{m2}x_2+ \cdots + a_{mn}x_n \leq b_m$

$x_1,x_2, \dots, x_n \geq 0$

is said to be a canonical maximization linear programming problem. The definition for minimization is analogous.

Definition 2

Let $x= (x_1,x_2,\cdots ,x_n), y=(y_1,y_2,\cdots ,y_n)\in$ R$^n$. Then $tx+(1-t)y$ for $0\leq t\leq 1$ is said to be the line segment between $x$ and $y$ inclusive.

Definition 3

The set of all points $(x_1,x_2, \cdots, x_n)$ satisfying the constraints of the canonical maximization problem is said to be the constraint set.

Definition 4

Let $S$ be a subset of R$^n$. $S$ is said to be convex if, whenever $x=(x_1,x_2,\cdots,x_n),\ y=(y_1,y_2,\cdots,y_n)\in S$, then

$tx+(1-t)y \in S$ for $0\leq t \leq 1$
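It is worth noting (a small check not spelled out in the book's definitions) that the constraint set of Definition 3 is convex in the sense of Definition 4, precisely because every constraint is linear. If $x, y \in S$ and $0 \leq t \leq 1$, then for each row $i$,

$$\sum_{j=1}^{n} a_{ij}\bigl(t x_j + (1-t)y_j\bigr) = t\sum_{j=1}^{n} a_{ij}x_j + (1-t)\sum_{j=1}^{n} a_{ij}y_j \leq t b_i + (1-t) b_i = b_i,$$

and similarly $t x_j + (1-t)y_j \geq 0$ for each $j$, so $tx+(1-t)y \in S$.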

Definition 5

A subset $S$ of R$^n$ is said to be bounded if there exists $r\geq 0$ such that every element of $S$ is contained in the closed ball of radius $r$ centered at the origin. A subset of R$^n$ is unbounded if it is not bounded.

Definition 6

The function $f(x_1,x_2,\cdots,x_n)$ is called the objective function of a canonical linear programming problem.

Definition 7

Any element of the constraint set is said to be a feasible point or feasible solution. Any feasible solution which maximizes/minimizes the objective function is said to be an optimal solution.

My Work

I genuinely have no idea how to do this problem. It wasn't assigned as homework, I just want to understand the reasoning behind it because that will help me do better in my course.

Best Answer

This is due to the fact that the function you are optimizing is linear and is defined on a compact set: the constraint set $S$ is closed (an intersection of closed half-spaces) and bounded by hypothesis.

To understand why this is true, consider the one-variable case: $$ \mbox{Max}\quad f(x)=\alpha x \quad \mbox{subject to}\quad a\le x \le b $$

Draw the function and the interval $[a,b]$: it is clear that the optimum is at $x=b$ if $\alpha>0$ (and at $x=a$ if $\alpha<0$), which is an extreme point of your feasible region (here a closed interval).

You can do the same drawing if you have two variables (you will have to draw in 3D) to convince yourself that it is true in this case as well. This is exactly what user Sean English says in his comment above.
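A quick numerical sanity check of the one-variable picture (the values of $\alpha$, $a$, $b$ below are arbitrary choices of mine): sampling the interval finely, no sampled point ever beats the right endpoint when $\alpha > 0$.

```python
# Maximize f(x) = alpha*x over [a, b]; with alpha > 0 the maximum
# should sit at the right endpoint b, an extreme point of [a, b].
alpha, a, b = 2.5, -1.0, 3.0
f = lambda x: alpha * x

# Sample the interval on a fine grid, including both endpoints.
n = 1000
samples = [a + (b - a) * k / n for k in range(n + 1)]
best_x = max(samples, key=f)
print(best_x)  # 3.0, the right endpoint, since alpha > 0
```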

Now for a formal proof: since $f$ is continuous and the constraint set $S$ is compact (closed and bounded), $f$ is bounded and attains its maximum and minimum on $S$ by the extreme value theorem. Linearity then forces an optimum at an extreme point: a bounded constraint set is the convex hull of its finitely many extreme points, and the value of a linear function at a convex combination is the corresponding weighted average of its values at those extreme points, which can never exceed the largest of them.
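To spell out that averaging step: a bounded constraint set is the convex hull of its finitely many extreme points $v_1, \dots, v_k$ (the Minkowski/Krein–Milman theorem for polytopes), so any feasible $x$ can be written as

$$x = \sum_{i=1}^{k} t_i v_i, \qquad t_i \geq 0, \qquad \sum_{i=1}^{k} t_i = 1.$$

Since $f$ is linear,

$$f(x) = \sum_{i=1}^{k} t_i f(v_i) \leq \max_{1 \leq i \leq k} f(v_i),$$

so the maximum of $f$ over $S$ is already attained at one of the extreme points; replacing $\max$ by $\min$ (and reversing the inequality) handles the minimization case.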
