[Math] Projected gradient descent for non-convex optimization problems

non-convex-optimization, oc.optimization-and-control

My question concerns the minimization of a convex function when the feasible set of solutions is non-convex. Can projected gradient descent (PGD) be used here to obtain a stationary solution?

By PGD, I am referring to the process of stepping in the negative direction of the gradient and then projecting the current solution onto the feasible set. Let's assume that the projection onto the non-convex set exists and is unique.

Best Answer

First, I'm assuming that your nonconvex feasible set $D$ is a subset of some larger convex set $C$ on which the objective function $f(x)$ is defined. It doesn't really make sense to talk about $f(x)$ being convex without its being defined on some convex set.

Also, to specify the projected (sub)gradient algorithm more precisely, you have to give a step size selection rule. For example, you might use

$ x_{k+1} = P_{D}\left( x_{k} - \alpha_{k} \nabla f(x_{k}) \right), $

where $P_{D}$ denotes the projection onto $D$ and $\alpha_{k}=1/k$.
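For concreteness, here is a minimal Python sketch of this iteration, assuming user-supplied callables `grad_f` and `proj_D` (hypothetical names, not part of the question):

```python
import numpy as np

def projected_gradient_descent(grad_f, proj_D, x0, num_iters=100):
    """Projected (sub)gradient descent with the step size rule alpha_k = 1/k.

    grad_f  -- callable returning a (sub)gradient of f at x
    proj_D  -- callable returning the projection of a point onto D
    x0      -- starting point, assumed to lie in D
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, num_iters + 1):
        alpha = 1.0 / k                        # diminishing step size
        x = proj_D(x - alpha * grad_f(x))      # gradient step, then project
    return x
```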

There are proofs of convergence for the projected subgradient descent method on convex feasible sets for various step size rules.

In general this doesn't work on a nonconvex feasible set. The proof of convergence on a convex feasible set constructs a sequence of solutions that converges to an optimal solution; although such a sequence certainly exists in $C$, it might not exist in $D$.

For example, let

$ f(x)=|x| $.

Let $D$ be the set $\{ -1,2 \}$. If we start at $x_{1}=2$, then the projected subgradient algorithm with the step size rule above produces the sequence $x_{2}=2$, $x_{3}=2$, $\ldots$, which doesn't converge to the minimum at $x=-1$. Each gradient step moves the iterate from $2$ to $2-\alpha_{k} \geq 1$, which never crosses the midpoint $1/2$ between the two feasible points, so the projection always returns it to $2$.
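To see this numerically, here is a small self-contained sketch of the example, with a subgradient of $|x|$ and a nearest-point projector onto $D$ hard-coded (the tie at the midpoint $x=1/2$ never arises along this trajectory):

```python
# f(x) = |x|, D = {-1, 2}: subgradient of |x| and nearest-point projector onto D.
grad_f = lambda x: 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
proj_D = lambda x: min((-1.0, 2.0), key=lambda d: abs(x - d))

x = 2.0
for k in range(1, 6):
    x = proj_D(x - (1.0 / k) * grad_f(x))
    print(k, x)   # prints 2.0 at every iteration; the iterate never reaches -1
```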
