Sequential convex optimization vs Projected gradient descent

convex-optimization, non-convex-optimization, nonlinear-optimization, numerical-optimization

$$\textbf{1) Projected Gradient Descent} $$
$$\min_x \; f(x), \quad \text{subject to } x \in C$$

$$y_{k+1} = x_k - t_k \nabla f(x_k)$$
$$x_{k+1} = \operatorname*{argmin}_{x \in C} \| y_{k+1} - x \|$$
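For concreteness, here is a rough sketch of how I understand the update in code. The problem, the fixed step size, and the choice of the non-negative orthant as the constraint set are just illustrative:

```python
import numpy as np

def projected_gradient_descent(grad_f, project, x0, step=0.1, iters=500):
    """Minimize f over C: take a gradient step, then project back onto C."""
    x = x0
    for _ in range(iters):
        y = x - step * grad_f(x)   # unconstrained gradient step
        x = project(y)             # Euclidean projection onto C
    return x

# Example: minimize ||x - p||^2 subject to x >= 0 (non-negative orthant).
p = np.array([1.0, -2.0, 3.0])
grad_f = lambda x: 2.0 * (x - p)
project = lambda y: np.maximum(y, 0.0)   # projection onto the orthant is a clip

x_star = projected_gradient_descent(grad_f, project, x0=np.zeros(3))
print(x_star)   # approaches [1, 0, 3]
```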

$$\textbf {2) Sequential convex programming}$$
Constructs a sequence of convex subproblems that locally approximate the original problem, and solves them to obtain a locally optimal solution.
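A minimal sketch of what I mean, on a toy problem of my own choosing: a linear objective with one non-convex equality constraint that is linearized around the current iterate, so each convex subproblem is an LP with a trust region (solved here with scipy.optimize.linprog; the fixed trust-region radius is only for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Toy non-convex problem:
#   minimize  x0 + x1   subject to  ||x||^2 = 1   (non-convex equality constraint).
c = np.array([1.0, 1.0])

def scp(x, rho=0.2, iters=50, tol=1e-6):
    for _ in range(iters):
        # Linearize g(x) = ||x||^2 - 1 = 0 around x_k:  2 x_k^T x = ||x_k||^2 + 1
        A_eq = np.atleast_2d(2.0 * x)
        b_eq = np.array([x @ x + 1.0])
        # Trust region keeps the step where the linearization is trustworthy.
        bounds = [(xi - rho, xi + rho) for xi in x]
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        if not res.success:
            break
        if np.linalg.norm(res.x - x) < tol:
            return res.x
        x = res.x
    return x

# Ends up near (-1/sqrt(2), -1/sqrt(2)); a practical SCP implementation would
# also adapt the trust-region radius to refine the solution.
print(scp(np.array([0.0, 1.0])))
```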

What are the primary advantages and disadvantages of each? I know that interior-point methods, which are commonly used to solve the subproblems in sequential convex programming, are generally second-order methods. Does this affect the quality and speed of the solution in comparison?

Best Answer

Projected gradient descent is preferred when the constraint set $C$ on which the optimization variables must lie is easy to project onto. Sequential convex optimization has no such requirement. Examples of sets that are easy to project onto are 1) the affine set $\{x \mid Ax = b\}$, 2) the non-negative orthant, etc.
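To illustrate (a quick sketch, assuming Euclidean projection and, for the affine case, that $A$ has full row rank): both of these projections have closed forms, which is what makes each projected gradient step cheap on such sets.

```python
import numpy as np

def project_affine(y, A, b):
    """Projection onto {x | Ax = b}:  x = y - A^T (A A^T)^{-1} (A y - b)."""
    correction = A.T @ np.linalg.solve(A @ A.T, A @ y - b)
    return y - correction

def project_nonneg(y):
    """Projection onto the non-negative orthant: clip negatives to zero."""
    return np.maximum(y, 0.0)

A = np.array([[1.0, 1.0, 1.0]])   # single constraint: x0 + x1 + x2 = 1
b = np.array([1.0])
y = np.array([0.9, -0.3, 0.1])

print(project_affine(y, A, b))    # closest point to y on the hyperplane
print(project_nonneg(y))          # [0.9, 0. , 0.1]
```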