Can a gradient descent method converge to a nonstrict local maximum

Tags: gradient-descent, numerical-optimization, optimization

In $\star$, the authors prove that gradient descent converges to a local minimizer for 'almost all' starting points, provided the objective function satisfies the strict saddle property. Nesterov, in his book, gives an example showing that gradient descent can converge to a saddle point. I was wondering whether there is an example of an objective function for which the steepest descent method converges to a nonstrict local maximum. Does one exist?


$\star$ Jason D. Lee, Max Simchowitz, Michael I. Jordan, Benjamin Recht, "Gradient descent converges to minimizers", 2016.

Best Answer

If a point has derivative zero, steepest descent started at that stationary point converges to it via the constant sequence.
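A minimal sketch of this trivial case (the function $-\max(x,0)^2$, the step size, and the starting point are my own illustration, not from the answer): this function is constant on $(-\infty, 0]$, so every $x < 0$ is a nonstrict local maximum with zero derivative, and steepest descent started there never moves.

```python
def f(x):
    # f(x) = 0 for x <= 0, f(x) = -x**2 for x > 0, so every
    # x < 0 is a nonstrict local maximum (and minimum) of f
    return -max(x, 0.0) ** 2

def fprime(x):
    # derivative of f: zero on the flat half-line x <= 0
    return -2.0 * x if x > 0.0 else 0.0

def steepest_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * fprime(x)   # x never changes once fprime(x) == 0
    return x
```

Started at $x_0 = -1.0$, a nonstrict local maximum, every iterate equals $-1.0$.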


Counterexample requested in the comments:


Consider the function $x \mapsto (\cos(((\cos(x))_{+})^{2})-1)\,x$, where $t_{+} = t$ if $t \geq 0$, and $t_{+}=0$ if $t < 0$. This function is constant (equal to zero) on countably many closed intervals, namely those where $\cos(x) \leq 0$. There are also countably many strict local maxima, lying between these disjoint intervals on the negative axis. Now it is not hard to find a negative starting point $x_0$ close to a strict local maximum such that the next steepest descent step lands inside one of the intervals where the function is constant; the gradient vanishes there, so the iteration stops at a point that is a nonstrict local maximum.
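A numerical sketch of this counterexample (the step size $0.5$, the starting point $-6.0$ near the strict local maximum around $-2\pi$, and the finite-difference gradient are my choices for illustration, not from the answer): the first descent step overshoots into the flat interval $[-3\pi/2, -\pi/2]$, where the gradient vanishes and the iteration freezes.

```python
import math

def pos(t):
    """t_+ = t if t >= 0, else 0."""
    return t if t >= 0.0 else 0.0

def f(x):
    # f(x) = (cos(((cos x)_+)^2) - 1) * x; constant (= 0) on the
    # closed intervals where cos(x) <= 0, e.g. [-3*pi/2, -pi/2]
    return (math.cos(pos(math.cos(x)) ** 2) - 1.0) * x

def fprime(x, h=1e-6):
    # central-difference approximation of f'
    return (f(x + h) - f(x - h)) / (2.0 * h)

def steepest_descent(x0, lr, steps=50):
    x = x0
    for _ in range(steps):
        x -= lr * fprime(x)
    return x

# Start near the strict local maximum around x = -2*pi; the first step
# jumps into the interior of the flat interval [-3*pi/2, -pi/2], and
# every later iterate stays at that point.
x_final = steepest_descent(-6.0, lr=0.5)
```

The limit point lies strictly inside the flat interval, so the function is constant in a neighborhood of it: the method has converged to a nonstrict local maximum.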
