[Math] Lagrange multipliers find local optima for non-convex functions

lagrange-multiplier, optimization

As stated here, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints.

I'm trying to figure out how this method finds the "local maxima and minima" of a function, regardless of whether it is convex or non-convex.

The basic idea of Lagrange multipliers is that, given the contours of the objective function and the constraint function, at a point where the two contours touch tangentially, the gradients of both are aligned.

So if the objective function is convex, the solution is a global optimum, but I'm not sure why, when it is non-convex, the solution would only be a "local optimum". I presume it's because this method can only discover solutions at tangent points, but not at points where the two contours intersect. Is that right?

Best Answer

None of the other answers discusses the effects of constraints. They simply discuss what can happen when the derivative of the objective is 0. The question is specifically about Lagrange's method. For solving $\min\{f(x) : g(x)=0\}$, Lagrange pointed out that the following condition is necessary for optimality: $$\nabla f(x) + \lambda \nabla g(x)=0, \quad g(x)=0.$$
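To make the condition concrete, here is a minimal SymPy sketch that solves the Lagrange system for a toy problem of my own choosing, $\min\{x^2+y^2 : x+y=1\}$; the problem instance is illustrative and not from the answer.

```python
# Minimal sketch (assumes SymPy) of Lagrange's necessary condition for
# the illustrative problem min{x^2 + y^2 : x + y = 1}.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x**2 + y**2      # objective
g = x + y - 1        # equality constraint, g(x, y) = 0

# Stationarity grad f + lambda * grad g = 0, plus feasibility g = 0
eqs = [sp.diff(f, v) + lam * sp.diff(g, v) for v in (x, y)] + [g]
print(sp.solve(eqs, [x, y, lam], dict=True))
# -> [{x: 1/2, y: 1/2, lambda: -1}]
```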

The condition is not a sufficient condition, even if the objective is convex. Take for example $\min\{x^2 : \cos(x) = 0\}$. Although $x^2$ is convex, any $x=0.5\pi + k \pi$, $k\in\mathbb Z$ satisfies the Lagrange equations (with $\lambda=2x$ or $\lambda=-2x$). Lagrange's condition is sufficient if the objective to be minimized is convex and the equality constraint is affine.
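A quick numerical check of this example (a sketch using only the standard library) shows that every feasible point satisfies the Lagrange equations, even though only $x=\pm\pi/2$ attains the constrained minimum:

```python
# Check of min{x^2 : cos(x) = 0}: with f(x) = x^2 and g(x) = cos(x),
# the Lagrange system is 2x - lambda*sin(x) = 0 and cos(x) = 0.
# Every x = pi/2 + k*pi is feasible, and lambda = 2x/sin(x) = ±2x
# solves stationarity, yet f(x) = x^2 shows only x = ±pi/2 is the
# constrained minimum.
import math

for k in range(-2, 3):
    x = math.pi / 2 + k * math.pi
    lam = 2 * x / math.sin(x)  # multiplier solving the stationarity equation
    print(f"k={k:+d}: x={x:+.4f}, lambda={lam:+.4f}, f(x)={x*x:.4f}")
```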

Problems with more than one constraint (i.e., when $g(x) \in \mathbb R^m$ with $m>1$) are harder, because Lagrange's condition is no longer necessary in general. However, if the gradients of the equality constraints are linearly independent at the optimum, or if the constraints are affine, Lagrange's condition is again necessary (source).
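To see why the independence assumption matters, consider a textbook-style example of my own (not from the answer): $\min\{x : y = 0,\ y - x^2 = 0\}$. The only feasible point is $(0,0)$, so it is optimal, but $\nabla g_1 = \nabla g_2 = (0,1)$ there, and the sketch below shows the Lagrange system has no solution at that point.

```python
# SymPy sketch: two constraints whose gradients are linearly dependent
# at the optimum (0, 0), so no Lagrange multipliers exist there.
import sympy as sp

x, y, l1, l2 = sp.symbols('x y lambda1 lambda2', real=True)
f, g1, g2 = x, y, y - x**2

# grad f + l1*grad g1 + l2*grad g2 = 0, plus both feasibility equations
eqs = [sp.diff(f, v) + l1 * sp.diff(g1, v) + l2 * sp.diff(g2, v)
       for v in (x, y)] + [g1, g2]
print(sp.solve(eqs, [x, y, l1, l2], dict=True))
# -> []  (the Lagrange system is infeasible at the optimum)
```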