[Math] Third and higher order optimality conditions

Tags: nonlinear-optimization, optimization

In the derivation of first- and second-order optimality criteria for a vector $X^*$ to be a local optimum of an unconstrained problem, we ignore the higher-order terms of the Taylor expansion as we approach the optimum. If we do not ignore them, can we deduce third- and higher-order optimality conditions?

If yes, why isn't there any literature about these conditions? Are they not practical (useful)?

Update: I found this paper but cannot access it.

Best Answer

Usually, one-dimensional optimality conditions of higher order are given in calculus courses.

Loosely speaking, given a function $f\colon (a,b)\to \mathbb{R}$ whose first $k-1$ derivatives vanish and whose $k$th derivative does not vanish at a point, that point is a local extremum or an inflection point depending on the parity of $k$ (an extremum if $k$ is even, an inflection point if $k$ is odd). Say, for example, a function $f$ such that $f(x)=C_0+C_1x^4+O(x^5)$ "looks like" the function $x\mapsto C_1x^4$ near the origin, and so it has a local minimum or maximum there depending on the sign of $C_1$.
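This one-dimensional test is mechanical enough to automate. Here is a minimal sketch using SymPy; the function name `higher_order_test` and its return labels are my own choices, not standard library API:

```python
import sympy as sp

def higher_order_test(f, x, x0, max_order=8):
    """Classify a stationary point x0 of f via the first non-vanishing
    derivative: even order gives an extremum (sign decides min vs. max),
    odd order gives an inflection point.
    """
    for k in range(2, max_order + 1):
        dk = sp.diff(f, x, k).subs(x, x0)
        if dk != 0:
            if k % 2 == 1:  # odd first non-vanishing derivative
                return "inflection"
            return "min" if dk > 0 else "max"
    return "inconclusive"  # all tested derivatives vanish

x = sp.symbols("x")
print(higher_order_test(x**4, x, 0))   # first non-vanishing derivative is 4th, positive
print(higher_order_test(-x**6, x, 0))  # 6th derivative, negative
print(higher_order_test(x**3, x, 0))   # 3rd derivative: odd order
```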

In higher dimensions, higher derivatives have the added difficulty of being higher-order multilinear forms. We can still perform the above analysis, but it is much more complicated than just checking a sign. To wit, consider the analogue of the above example: a function $f=f(x, y)$ whose derivatives up to third order all vanish at the origin. Taylor's formula reads \begin{equation} f(x, y)=\underbrace{\frac{1}{4!}\left(\frac{\partial^4f}{\partial x^4}x^4+ 4\frac{\partial^4f}{\partial x^3 \partial y} x^3y+\dots + \frac{\partial^4f}{\partial y^4} y^4\right)}_{=q_4(x, y)}+O\left( (x^2+y^2)^{\frac{5}{2}}\right), \end{equation} with all derivatives evaluated at the origin. The origin is a local minimum if the quartic form $q_4$ is positive definite, a local maximum if it is negative definite, and a saddle if $q_4$ is indefinite; if $q_4$ is merely semidefinite, even higher-order terms must be examined. As far as I know, there is no simple algebraic tool that allows us to determine the definiteness of a quartic form quickly.
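Since $q_4$ is homogeneous, its definiteness is decided by its sign on the unit circle, which suggests a numerical sanity check: sample $q_4(\cos\theta, \sin\theta)$ and inspect the signs. This is a heuristic sketch of my own (sampling can miss thin sign regions, so it is not a proof of definiteness):

```python
import numpy as np

def quartic_sign_sample(q4, n=10000):
    """Heuristically classify a homogeneous quartic form q4(x, y) by
    sampling its sign on the unit circle.  By homogeneity, the sign on
    the circle determines the sign everywhere except the origin.
    """
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    vals = q4(np.cos(theta), np.sin(theta))
    if np.all(vals > 0):
        return "positive definite"   # origin is a local minimum
    if np.all(vals < 0):
        return "negative definite"   # origin is a local maximum
    if np.any(vals > 0) and np.any(vals < 0):
        return "indefinite"          # origin is a saddle
    return "semidefinite"            # vanishes on the sample: inconclusive

print(quartic_sign_sample(lambda x, y: x**4 + y**4))
print(quartic_sign_sample(lambda x, y: x**4 - 6 * x**2 * y**2 + y**4))
```

The second example is positive along the axes but negative along the diagonals (at $\theta=\pi/4$ it evaluates to $-1$), so the sampled classification is "indefinite", i.e., a saddle.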
