[Math] Lagrange multipliers: when is local extremum a global extremum

inequality, lagrange multiplier

Consider the following Olympiad problem from the IMO shortlist:

Let the real numbers $a,b,c,d$ satisfy the relations $a+b+c+d=6$ and $a^2+b^2+c^2+d^2=12.$ Prove that:

$36 \leq 4 \left(a^3+b^3+c^3+d^3\right) - \left(a^4+b^4+c^4+d^4 \right) \leq 48.$


There is an elementary solution, but I tried to do it with Lagrange multipliers (2 constraints). I first showed that if $(a,b,c,d)$ is a critical point then there can be at most $3$ distinct values among $a,b,c,d$. Using this I then showed that the only critical points are (the permutations of) $(1,1,1,3)$, which gives the local minimum of $36$, and $(2,2,2,0)$, which gives the local maximum of $48$.
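For what it's worth, here is a small sympy sketch (my own sanity check, not part of the argument) confirming that the two claimed points are feasible, satisfy the Lagrange condition for this problem, and take the values $36$ and $48$:

```python
# Sanity check of the claimed critical points with sympy (my own verification,
# not part of the argument above).
import sympy as sp

a, b, c, d, l1, l2 = sp.symbols('a b c d lambda1 lambda2', real=True)
xs = (a, b, c, d)

f  = 4*sum(t**3 for t in xs) - sum(t**4 for t in xs)   # objective
g1 = sum(xs) - 6                                        # first constraint
g2 = sum(t**2 for t in xs) - 12                         # second constraint

def check(point):
    """Feasibility, objective value, and multipliers solving grad f = l1*grad g1 + l2*grad g2."""
    subs = dict(zip(xs, point))
    assert g1.subs(subs) == 0 and g2.subs(subs) == 0    # the point is feasible
    eqs = [sp.Eq(sp.diff(f, t).subs(subs),
                 l1*sp.diff(g1, t).subs(subs) + l2*sp.diff(g2, t).subs(subs))
           for t in xs]                                 # one equation per coordinate
    return f.subs(subs), sp.solve(eqs, (l1, l2), dict=True)

print(check((1, 1, 1, 3)))   # (36, [{lambda1: 12, lambda2: -2}])
print(check((2, 2, 2, 0)))   # (48, [{lambda1: 0, lambda2: 4}])
```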

However, a local extremum is not necessarily a global extremum, so I am not done here.

Is there any way to verify that a local extremum is in fact global? I am asking in general.

Best Answer

In general, if your optimization problem has continuously differentiable objective and constraint functions and satisfies a technical hypothesis called a constraint qualification (there are many constraint qualifications in the literature), then the Lagrange multiplier condition is a necessary condition for a point $x^{*}$ to be a local minimum (or maximum).

In other words, if a feasible point $x^{*}$ is a local minimum (maximum), and a constraint qualification holds, then the Lagrange multiplier condition must hold at $x^{*}$.
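To make this concrete for the problem above: with $f(a,b,c,d) = 4\left(a^3+b^3+c^3+d^3\right) - \left(a^4+b^4+c^4+d^4\right)$, $g_1 = a+b+c+d-6$ and $g_2 = a^2+b^2+c^2+d^2-12$, the Lagrange multiplier condition at $x^{*}$ reads
$$\nabla f(x^{*}) = \lambda_1 \nabla g_1(x^{*}) + \lambda_2 \nabla g_2(x^{*}),$$
i.e. $12t^2 - 4t^3 = \lambda_1 + 2\lambda_2 t$ for each coordinate $t \in \{a,b,c,d\}$.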

In this situation, if you can identify all of the points that satisfy the Lagrange multiplier condition, then, provided a global minimum actually exists, you can simply pick the candidate with the smallest objective value as the global minimum. In your problem it does exist: the feasible set (a sphere intersected with a hyperplane) is closed and bounded and the objective is continuous, so the extreme value theorem guarantees both a global minimum and a global maximum.
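As a purely numerical cross-check of this recipe (a sketch assuming scipy is available; it corroborates the candidate values but proves nothing), one can minimize and maximize the objective over the feasible set from many random starting points and confirm that nothing beyond $36$ and $48$ shows up:

```python
# Numerical cross-check (not a proof): minimize and maximize the objective over the
# feasible set from many random starting points and see what values come out.
import numpy as np
from scipy.optimize import minimize

f = lambda x: 4*np.sum(x**3) - np.sum(x**4)
cons = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 6},
        {'type': 'eq', 'fun': lambda x: np.sum(x**2) - 12}]

rng = np.random.default_rng(0)
vals = []
for _ in range(200):
    x0 = rng.normal(size=4)
    for sign in (1, -1):                     # sign=1 minimizes f, sign=-1 maximizes f
        res = minimize(lambda x: sign*f(x), x0, constraints=cons, method='SLSQP')
        if res.success:
            vals.append(f(res.x))

print(min(vals), max(vals))                  # should be approximately 36 and 48
```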

Unfortunately, without a constraint qualification hypothesis, it is possible for a point to be a local extremum (and perhaps even a global maximum or minimum) even though it does not satisfy the Lagrange multiplier condition! Most discussions of Lagrange multipliers in calculus textbooks gloss over this important technical detail.

One commonly used constraint qualification is the linear independence constraint qualification (LICQ), "the gradients of the constraint functions at $x^{*}$ are linearly independent." The LICQ is satisfied at nearly every feasible point in your problem. You can easily finish this off by taking care of the exception(s).
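Here is a minimal sympy sketch of that check (my own verification): the two constraint gradients are $(1,1,1,1)$ and $(2a,2b,2c,2d)$, which are linearly dependent only when $a=b=c=d$, and no such point is feasible.

```python
# A sketch of the LICQ check for this problem, using sympy.
import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)
xs = (a, b, c, d)
g1 = sum(xs) - 6
g2 = sum(t**2 for t in xs) - 12

# 2x4 Jacobian of the constraints; its rows are the two constraint gradients.
J = sp.Matrix([[sp.diff(g, t) for t in xs] for g in (g1, g2)])

# The gradients are linearly dependent exactly where every 2x2 minor vanishes,
# which forces a = b = c = d.
minors = [J[0, i]*J[1, j] - J[0, j]*J[1, i] for i in range(4) for j in range(i + 1, 4)]
print(sp.solve(minors, xs, dict=True))               # e.g. [{a: d, b: d, c: d}]

# On the hyperplane a+b+c+d = 6 that means a = b = c = d = 3/2, which violates
# the second constraint, so the exceptional case is not feasible.
print(g2.subs({t: sp.Rational(3, 2) for t in xs}))   # -3, not 0
```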

It's a good exercise to set up a counterexample optimization problem in which a point $x^{*}$ is a global minimum, the gradients of the constraints are linearly dependent, and you can't solve for the Lagrange multipliers.
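For concreteness, one standard example of this phenomenon (my own choice; you may have a different one in mind): minimize $f(x,y)=x$ subject to $g_1(x,y)=y=0$ and $g_2(x,y)=y-x^3=0$. The only feasible point is the origin, so it is trivially the global minimum, yet both constraint gradients there equal $(0,1)$ and the multiplier system has no solution. A quick sympy check:

```python
# One standard example (my own choice): minimize f(x,y) = x subject to
# g1 = y = 0 and g2 = y - x**3 = 0.  The only feasible point is (0,0), so it is
# the global minimum, yet both constraint gradients there are (0,1): linearly
# dependent, and grad f = l1*grad g1 + l2*grad g2 has no solution.
import sympy as sp

x, y, l1, l2 = sp.symbols('x y lambda1 lambda2', real=True)
f, g1, g2 = x, y, y - x**3

grad = lambda h: sp.Matrix([sp.diff(h, x), sp.diff(h, y)])
origin = {x: 0, y: 0}

lhs = grad(f).subs(origin)                          # (1, 0)
rhs = (l1*grad(g1) + l2*grad(g2)).subs(origin)      # (0, l1 + l2)
print(sp.solve(list(lhs - rhs), (l1, l2)))          # [] -- no multipliers exist
```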

You can find proofs of the Lagrange multiplier necessary condition with the LICQ hypothesis in many textbooks on optimization. More advanced books will include proofs with other constraint qualifications and explore the implications between different constraint qualifications.