[Math] What’s the Advantage of Difference of Convex Programming Compared to the Gradient Projection Method

Tags: convex optimization, convex-analysis, nonlinear optimization, optimization

Consider the following problem ($P_1$)

$$(P_1)\qquad
\begin{aligned}
\min_{\mathbf{x}\in\mathbb{R}^n}\quad & f_1(\mathbf{x})-f_2(\mathbf{x})\\
\text{s.t.}\quad & \mathbf{A}\mathbf{x}=\mathbf{b},\\
& \mathbf{0}\le \mathbf{x}\le \mathbf{1},
\end{aligned}$$
where $f_1(\mathbf{x})$ and $f_2(\mathbf{x})$ are two continuously differentiable and convex functions.

According to Thomas Lipp and Stephen Boyd, "Variations and Extensions of the Convex-Concave Procedure," $(P_1)$ can be solved by difference-of-convex (DC) programming, via the convex-concave procedure (CCP), to obtain a stationary point. However, $(P_1)$ can also be solved by the gradient projection method (GPM).
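For reference, starting from a feasible point $\mathbf{x}^0$, each CCP iteration linearizes $f_2$ at the current iterate $\mathbf{x}^k$ and solves the convex subproblem
$$\mathbf{x}^{k+1}\in\arg\min_{\mathbf{A}\mathbf{x}=\mathbf{b},\;\mathbf{0}\le\mathbf{x}\le\mathbf{1}}\; f_1(\mathbf{x})-\Big(f_2(\mathbf{x}^k)+\nabla f_2(\mathbf{x}^k)^{\top}(\mathbf{x}-\mathbf{x}^k)\Big),$$
repeating until the iterates stop changing.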

My question is: when solving $(P_1)$, is DC programming more efficient than GPM? What are the advantages of DC programming compared with GPM?

Best Answer

The convex-concave procedure (CCP) can be applied to a DC programming problem even in cases where the convex functions $f_1$ and $f_2$ are non-smooth.

Gradient descent (and likewise the gradient projection method) can't be applied to DC programming problems in cases where the convex functions are non-smooth, because $f_{1}(\mathbf{x})-f_{2}(\mathbf{x})$ won't generally be smooth, so the gradient required for each step need not exist.
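For illustration, here is a minimal Python sketch of one way to run CCP on $(P_1)$ with CVXPY. The quadratic choices $f_1(\mathbf{x})=\|\mathbf{x}\|^2$ and $f_2(\mathbf{x})=\|\mathbf{C}\mathbf{x}\|^2$, and the random data $\mathbf{A},\mathbf{b},\mathbf{C}$, are assumptions made purely so the example runs; they are not part of the original question.

```python
# Minimal CCP sketch for (P_1), assuming (for illustration only)
# f1(x) = ||x||^2 and f2(x) = ||C x||^2 with random A, b, C.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 10, 3
A = rng.standard_normal((m, n))
x_feas = rng.uniform(0.2, 0.8, n)   # interior point used only to build a feasible b
b = A @ x_feas
C = rng.standard_normal((n, n))

def grad_f2(x):
    """Gradient of f2(x) = ||C x||^2."""
    return 2 * C.T @ (C @ x)

x_k = x_feas.copy()                 # feasible starting point
for _ in range(50):
    x = cp.Variable(n)
    # Linearize f2 at x_k; the constant f2(x_k) term does not affect the argmin.
    lin_f2 = grad_f2(x_k) @ (x - x_k)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x) - lin_f2),
                      [A @ x == b, x >= 0, x <= 1])
    prob.solve()
    if np.linalg.norm(x.value - x_k) < 1e-6:
        break
    x_k = x.value

print("stationary point estimate:", x_k)
```

Each CCP iteration solves a full convex program, which is more expensive per step than a projected gradient step, but it does not require the overall objective to be differentiable: when $f_2$ is non-smooth, a subgradient can replace $\nabla f_2$ in the linearization.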