According to Wikipedia, "the necessary conditions are sufficient for optimality if the objective function of a maximization problem is a concave function, the inequality constraints are continuously differentiable convex functions and the equality constraints are affine functions". What about the case of a minimization problem?
The sufficient condition for optimality of a minimization problem
karush-kuhn-tucker, maxima-minima, optimization
Related Solutions
The primal problem is \begin{align} \operatorname{minimize}_x & \quad f_0(x) \\ \text{subject to} & \quad f_i(x) \leq 0 \quad \text{for } i = 1,\ldots, m\\ & \quad Ax = b. \end{align} The functions $f_i, i = 0,\ldots,m$ are differentiable and convex.
Assume $x^*$ is feasible for the primal problem (so $f_i(x^*) \leq 0$ for $i = 1,\ldots,m$ and $A x^* = b$) and that there exist vectors $\lambda \geq 0$ and $\eta$ such that $$ \tag{$\spadesuit$}\nabla f_0(x^*) + \sum_{i=1}^m \lambda_i \nabla f_i(x^*) + A^T \eta = 0 $$ and $$ \lambda_i f_i(x^*) = 0 \quad \text{for } i = 1,\ldots,m. $$ Because the functions $f_i$ are convex, equation ($\spadesuit$) implies that $x^*$ is a minimizer (with respect to $x$) of the Lagrangian $$ L(x,\lambda,\eta) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \eta^T(Ax - b). $$ Thus, if $x$ is feasible for the primal problem, then \begin{align*} f_0(x) & \geq f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \eta^T(Ax - b) \\ & \geq f_0(x^*) + \sum_{i=1}^m \lambda_i f_i(x^*) + \eta^T(Ax^* - b) \\ & = f_0(x^*). \end{align*} (The first inequality holds because $\lambda_i \geq 0$, $f_i(x) \leq 0$, and $Ax = b$; the last equality uses complementary slackness and $Ax^* = b$.) This shows that $x^*$ is a minimizer for the primal problem.
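For concreteness, here is a minimal numerical check of these conditions on a toy instance, assuming numpy; the specific problem, candidate point $x^*$, and multipliers below are my own illustration, not part of the original answer.

```python
import numpy as np

# Toy convex instance (illustrative, not from the answer above):
#   minimize   f0(x) = x1^2 + x2^2
#   subject to f1(x) = 1 - x1 - x2 <= 0   (differentiable and convex)
#              Ax = b with A = [1, -1], b = [0]
A = np.array([[1.0, -1.0]])
b = np.array([0.0])

f1      = lambda x: 1 - x[0] - x[1]
grad_f0 = lambda x: 2 * x                   # gradient of the objective
grad_f1 = lambda x: np.array([-1.0, -1.0])  # gradient of the inequality constraint

# Candidate optimum and multipliers (lambda >= 0, eta unrestricted)
x_star = np.array([0.5, 0.5])
lam, eta = 1.0, np.array([0.0])

# Primal feasibility
assert f1(x_star) <= 1e-12 and np.allclose(A @ x_star, b)
# Stationarity: grad f0 + lambda * grad f1 + A^T eta = 0
assert np.allclose(grad_f0(x_star) + lam * grad_f1(x_star) + A.T @ eta, 0)
# Complementary slackness: lambda * f1(x*) = 0
assert np.isclose(lam * f1(x_star), 0)
print("KKT conditions hold; by convexity, x* is a global minimizer.")
```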
Convenient conditions for checking that a certain stationary point is optimal generally require that the objective function is convex. Nonconvex optimization is a much more difficult subject with a lot of special cases for different problems. Two of the many reasons for this are that there can be stationary points that are not local minima and that there can be multiple local minima.
On a simpler level, notice that the Lagrange multipliers condition is a first order condition. You may recall the second order condition from calculus, which checks the sign of the second derivative at the critical point. This condition is sufficient, but not necessary: when the second derivative is positive you have a minimum, when it is negative you have a maximum, and when it is zero you get no information.
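As a quick illustration of that one-dimensional test, here is a small sketch in plain Python (the finite-difference helper is my own, hypothetical construction):

```python
# Approximate f'' at x0 with a central finite difference.
def second_derivative(f, x0, h=1e-5):
    return (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2

print(second_derivative(lambda x: x**2, 0.0))   # ~  2 -> minimum at 0
print(second_derivative(lambda x: -x**2, 0.0))  # ~ -2 -> maximum at 0
print(second_derivative(lambda x: x**3, 0.0))   # ~  0 -> no information
```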
This condition generalizes to multivariable problems through the Hessian; a small classification sketch follows the list below. Cases:
- The Hessian is positive definite (all eigenvalues are strictly positive) $\Rightarrow$ minimum.
- The Hessian is negative definite (all eigenvalues are strictly negative) $\Rightarrow$ maximum.
- The Hessian is indefinite (there are some strictly positive eigenvalues and some strictly negative eigenvalues) $\Rightarrow$ saddle point. Under normal circumstances this is neither a maximum nor a minimum, because we can follow an eigenvector associated with a negative eigenvalue to go down and an eigenvector associated with a positive eigenvalue to go up.
- The Hessian is positive (resp. negative) semidefinite (all eigenvalues are nonnegative (resp. nonpositive), but some are zero) $\Rightarrow$ the test is inconclusive: you may have a minimum (resp. maximum), but you may also have nothing. For example, $f(x,y) = x^2 + y^4$ has a minimum at the origin while $f(x,y) = x^2 + y^3$ does not, yet both have the same positive semidefinite Hessian there.
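Here is the promised classification sketch, assuming numpy; the `classify` helper and its tolerance are my own illustration:

```python
import numpy as np

def classify(hessian, tol=1e-9):
    """Classify a critical point from the eigenvalues of a symmetric Hessian."""
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig > tol):
        return "local minimum (positive definite)"
    if np.all(eig < -tol):
        return "local maximum (negative definite)"
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point (indefinite)"
    return "inconclusive (semidefinite: some eigenvalues are zero)"

# Hessians at the origin for a few model functions:
print(classify(np.diag([2.0, 2.0])))    # f = x^2 + y^2         -> minimum
print(classify(np.diag([2.0, -2.0])))   # f = x^2 - y^2         -> saddle
print(classify(np.diag([2.0, 0.0])))    # f = x^2 + y^4 or y^3  -> inconclusive
```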
Of course, everything from the previous paragraph assumes you have enough smoothness that all of this makes sense, and there are certainly nonsmooth optimization problems. These problems have their own special issues. The first place to look for dealing with these kinds of problems is convex analysis. My knowledge of convex analysis is quite limited, so I will not try to discuss it here.
Best Answer
Exactly the same, except for changing the condition to "the objective function of a minimization problem is a convex function".
Indeed, this is equivalent because the argument values that minimize a function are exactly those that maximize its negative, and the negative of a concave function is a convex function.
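As a quick sanity check of that equivalence, here is a small grid-search sketch, assuming numpy (the particular function and grid are my own illustration):

```python
import numpy as np

f  = lambda x: (x - 2.0) ** 2        # convex, minimized at x = 2
xs = np.linspace(-5, 5, 10001)
# The argmin of f coincides with the argmax of its negative.
assert np.isclose(xs[np.argmin(f(xs))], xs[np.argmax(-f(xs))])
print("argmin f == argmax(-f) on the grid")
```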