Local optimality of a KKT point

Tags: convex-optimization, karush-kuhn-tucker, nonlinear-optimization, optimization

Consider the problem
\begin{equation}
\min_x f(x)~~~{\rm s.t.}~~~ g_i(x)\leq 0,~~i=1,\dots,I,
\end{equation}

where $x$ is the optimization variable, $f(x)$ is the objective function, and $g_i(x)$ is the $i$th constraint function; $I$ denotes the number of constraints.

Consider now a point $x^\star$ that satisfies the Karush-Kuhn-Tucker (KKT) conditions. In general, in non-convex optimization, a KKT point can be anything: a local optimum, a global optimum, a saddle point, or even a maximum.
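For reference, these conditions state that there exist multipliers $\mu_i$, $i=1,\dots,I$, such that
\begin{align*}
\nabla f(x^\star) + \sum_{i=1}^{I} \mu_i \nabla g_i(x^\star) &= 0, \\
g_i(x^\star) \le 0, \quad \mu_i \ge 0, \quad \mu_i\, g_i(x^\star) &= 0, \qquad i=1,\dots,I,
\end{align*}
i.e., stationarity, primal and dual feasibility, and complementary slackness.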

Can we claim that $x^\star$ is at least a local optimum if we assume that $f(x)$ is strictly convex and the $g_i(x)$ are non-convex? All functions are assumed to be at least twice continuously differentiable.

I have read this claim in this paper: http://kang.nt.e-technik.tu-darmstadt.de/nt/fileadmin/nas/Publications/2012/Cheng_TSP_2012_Posted.pdf [Appendix C], but I'm not 100% sure whether it is true. I also could not find any literature on it.

Best Answer

Consider the following problem: \begin{align*} \min \quad & x_1^2 + \frac12 x_2^2, \\ \text{such that} \quad & -x_1^2 - x_2^2 \le -1. \end{align*} The objective is strictly convex, but the constraint is strictly concave.
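To see where the KKT points come from, write the Lagrangian of this problem and its gradient (a short derivation of the candidates):
\begin{align*}
L(x,\mu) &= x_1^2 + \tfrac12 x_2^2 + \mu\left(1 - x_1^2 - x_2^2\right),\\
\nabla_x L(x,\mu) &= \begin{pmatrix} 2(1-\mu)\,x_1 \\ (1-2\mu)\,x_2 \end{pmatrix}.
\end{align*}
Stationarity forces $x_1 = 0$ or $\mu = 1$, and $x_2 = 0$ or $\mu = \tfrac12$; since $\mu > 0$ in both cases, complementary slackness makes the constraint active, $x_1^2 + x_2^2 = 1$, leaving exactly the candidates $(0,\pm 1)$ with $\mu = \tfrac12$ and $(\pm 1,0)$ with $\mu = 1$.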

It is easy to check that $x = (0,1)$ is a global minimizer (by symmetry, so is $(0,-1)$), and that these are the only local minimizers. The point $x = (1,0)$ is, however, a KKT point with multiplier $\mu = 1$, yet it is not a local minimizer: on the feasible boundary $x_1^2 + x_2^2 = 1$ the objective equals $\tfrac12 + \tfrac12 x_1^2$, which strictly decreases as one moves away from $(1,0)$ along the circle.
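As a quick sanity check, here is a minimal numerical sketch (the helper names `f`, `g`, `grad_f`, `grad_g` are mine, chosen purely for illustration) verifying that $(1,0)$ satisfies the KKT conditions with $\mu = 1$ while nearby feasible points achieve a strictly smaller objective:

```python
import numpy as np

# Objective and constraint of the counterexample; feasible iff g(x) <= 0.
f = lambda x: x[0]**2 + 0.5 * x[1]**2
g = lambda x: -x[0]**2 - x[1]**2 + 1
grad_f = lambda x: np.array([2 * x[0], x[1]])
grad_g = lambda x: np.array([-2 * x[0], -2 * x[1]])

x_star, mu = np.array([1.0, 0.0]), 1.0

# Stationarity and complementary slackness at (1, 0) with mu = 1.
print(grad_f(x_star) + mu * grad_g(x_star))  # [0. 0.]  -> stationary
print(g(x_star))                             # 0.0      -> constraint active

# Feasible points on the unit circle near (1, 0) have strictly lower
# objective, so (1, 0) cannot be a local minimizer.
for t in [0.1, -0.1]:
    x = np.array([np.cos(t), np.sin(t)])
    print(f(x) < f(x_star))                  # True, True
```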
