[Math] Are all non-convex problems created equal?

convex optimization, nonlinear optimization, optimization

The distinction between convex and non-convex problems is usually framed as the distinction between easy and hard problems. In the convex case you are golden: local optima are global optima; almost every dumb algorithm you can think of converges; and the differentiable and non-differentiable cases are pretty much the same, apart from the attainable rates of convergence. In the non-convex case, not so much: there are a few cases where you can relax to a convex problem and still recover the exact solution, but in general you have to resort to approximate and/or heuristic methods such as alternating minimization, relaxations, genetic algorithms, and so on.

My question is whether there are non-convex problems that are easy to solve exactly, i.e., for which you can get a global optimum to arbitrary precision. I would like examples besides the best rank $k$ approximation to a given fixed matrix, obtained by computing its SVD and keeping just the largest $k$ singular values. I am also aware that there is something about submodular functions, but I do not know many details.
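For concreteness, the SVD-based construction can be sketched in a few lines of NumPy (the random matrix and the choice of $k$ below are just illustrative placeholders for the "given fixed matrix"):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))  # placeholder for the given fixed matrix
k = 2

# Eckart-Young: the best rank-k approximation (in both the Frobenius and
# spectral norms) keeps only the k largest singular values of the SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]  # scale the top-k left singular vectors

# The Frobenius error equals the norm of the discarded singular values.
err = np.linalg.norm(A - A_k)
```

Despite the objective (distance to the set of rank-$k$ matrices) being non-convex, this gives the global optimum up to the accuracy of the SVD itself.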

Best Answer

When you say "non-convex problems", do you mean specifically non-convex optimization problems? That is what the tags and the context suggest, but I just want to double-check.

Assuming that is the case, I also have an issue with your use of the phrase "exact solution". It is not going to be possible, even in exact arithmetic, to get the exact solution to most convex programs: the exact optimal value, and/or an exact optimal point. You can, in theory, do this for most linear programs by using the simplex method to identify an optimal basis---but the simplex method has worst-case exponential complexity! In fact, the polynomial-time algorithms for linear and convex programming are iterative, and approach an exact solution only asymptotically. Even in exact arithmetic, you must trade computation time for precision. I am sure what you are referring to here is simply the "global" solution, not the "exact" solution, but I think the distinction is worth making.

So, let me answer the following revised question: are there broad classes of non-convex optimization problems for which tractable algorithms exist to obtain the global solution to within predefined numerical accuracy, under modest conditions, and assuming exact arithmetic?

I would say the answer is no. The class of convex programs is certainly the most general class of tractable optimization problems that we know of. Yes, there are exceptions; you pointed to (the minimization of) submodular functions, for instance. But you then suggested that's not broad enough to count, and I agree. No such exceptions are even going to approach the breadth of convex programs.

Geometric programs form a class of tractable, non-convex optimization problems. When you consider the larger class of so-called generalized geometric programs (and you should; see the link), it's a relatively extensive class. But in fact, every GP (and GGP) can be mechanically converted to a convex program via a simple change of variables ($x\rightarrow e^y$). So I would argue this is really not a distinct class, but a subset of convex programs "in disguise".
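As a tiny illustration of that change of variables (my own toy example, not taken from the link): minimizing the posynomial $x + 1/x$ over $x > 0$ becomes, after substituting $x = e^y$ and taking logs, the minimization of $\log(e^y + e^{-y})$, a convex (log-sum-exp) function of $y$, so any local method finds the global optimum:

```python
import numpy as np
from scipy.optimize import minimize

# Toy GP: minimize f(x) = x + 1/x over x > 0 (optimum f = 2 at x = 1).
# Under x = e^y, log f(e^y) = log(e^y + e^{-y}) is convex in y.
def log_posynomial(y):
    return np.logaddexp(y[0], -y[0])  # numerically stable log(e^y + e^{-y})

res = minimize(log_posynomial, x0=[3.0])  # default quasi-Newton solver
x_opt = float(np.exp(res.x[0]))  # map back to the original variable
f_opt = float(np.exp(res.fun))   # optimal posynomial value
```

Because the transformed problem is convex, the local solver's answer is certifiably global, even though $x + 1/x$ is not convex-looking to a GP novice and the original feasible set lives on the positive axis.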

Even the SVD, the other exception you mention, has ties to convex optimization. The maximum singular value is convex in the elements of the matrix, as is the sum of the largest $k$ singular values. So to obtain, say, the $k$th singular value, you could in theory solve two convex programs, one for $\sum_{i=1}^k \sigma_i$ and one for $\sum_{i=1}^{k-1} \sigma_i$, and take the difference. Of course, you'd never do that, just as you'd never use a general-purpose convex programming engine to solve a least-squares problem. (And indeed, accuracy would be an issue.) But the fact that you could speaks to the universality of convex optimization.
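The difference trick is easy to check numerically (an illustrative sketch: here I simply evaluate the two Ky Fan norms from the SVD rather than actually solving the two convex programs):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))  # arbitrary example matrix
k = 3

# The sum of the k largest singular values (the Ky Fan k-norm) is convex
# in the entries of A; sigma_k is the difference of two such sums.
s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
sigma_k = np.sum(s[:k]) - np.sum(s[:k - 1])
```

In other words, each individual singular value is a difference of two convex functions of the matrix, which is exactly why the two-convex-programs construction above recovers it.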
