Solved – Are machine learning techniques “approximation algorithms”?

approximation, machine learning, optimization

Recently there was an ML-like question over on cstheory stackexchange, and I posted an answer recommending Powell's method, gradient descent, genetic algorithms, or other "approximation algorithms". In a comment, someone told me these methods were "heuristics" and not "approximation algorithms", and that they frequently do not come close to the theoretical optimum (because they "frequently get stuck in local minima").

Do others agree with that? Also, it seems to me there is a sense in which heuristic algorithms can be guaranteed to come close to the theoretical optimum if they are set up to explore a large part of the search space (e.g., by setting parameters/step sizes small), although I haven't seen that in a paper. Does anyone know if this has been shown or proven in a paper? (If not for a large class of algorithms, maybe for a small class, say neural networks, etc.)

Best Answer

I think you're mixing up several important concepts. Let me try to clarify a few things:

  • There are metaheuristic methods, which iteratively try to improve a candidate solution. Examples are tabu search, simulated annealing, genetic algorithms, etc. While there are many cases where these methods work nicely, there is no deep understanding of when they work and when they don't, and, more importantly, when they fail to reach the optimum they can end up arbitrarily far from it (the first sketch after this list illustrates that failure mode). Problems attacked with metaheuristics tend to be discrete in nature, because there are far better tools for continuous problems, though every now and then you see metaheuristics applied to continuous problems too.

  • There are numerical optimization methods. People in this community carefully examine the nature of the function to be optimized and the constraints on the solution (grouping problems into classes such as convex optimization, quadratic programming, linear programming, etc.), and they apply algorithms that have been proven to work for that type of function and those types of constraints. When people in this area say "shown to work", they mean a proof. These methods apply to continuous problems, and when your problem falls into this category, this is definitely the tool to use (see the second sketch below).

  • There are discrete optimization methods, which tend to be connected to well-studied discrete problems such as shortest paths, max flow, etc. People in this area also care that their algorithms provably work. A subset of this group studies really hard problems for which no fast algorithm is expected to exist; for those, they design approximation algorithms: fast algorithms for which one can prove that the solution returned is within a constant factor of the true optimum (see the third sketch below). These results, too, come as proofs.
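
To make the first bullet concrete, here is a minimal Python sketch of plain hill climbing on a one-dimensional function with two local minima. Everything in it (the function f, the step size, the starting points) is an illustrative choice of mine, not taken from any paper; it only shows the failure mode where a metaheuristic converges, but to the wrong minimum.

    import random

    def f(x):
        return x**4 - 3 * x**2 + x   # local minima near x ≈ -1.30 (global) and x ≈ +1.13

    def hill_climb(x, step=0.01, iters=10_000):
        """Greedy local search: accept a random neighboring step only if it improves f."""
        for _ in range(iters):
            candidate = x + random.choice([-step, step])
            if f(candidate) < f(x):
                x = candidate
        return x

    random.seed(0)
    for start in (-2.0, 2.0):
        x = hill_climb(start)
        print(f"start {start:+.1f} -> x = {x:+.3f}, f(x) = {f(x):+.3f}")

    # The run started at +2.0 ends in the worse local minimum and stays there;
    # nothing in the method bounds how far from the optimum it can land.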
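
For the second bullet, here is an equally small sketch of the regime where proofs exist: gradient descent on a strongly convex quadratic, where convergence to the unique global optimum at a known rate is a classical theorem. The matrix A, the vector b, and the 1/L step size are made-up illustrative values.

    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
    b = np.array([1.0, -1.0])

    def grad(x):
        return A @ x - b                      # gradient of 0.5 * x'Ax - b'x

    L = np.linalg.eigvalsh(A).max()           # Lipschitz constant of the gradient
    x = np.zeros(2)
    for _ in range(200):
        x = x - (1.0 / L) * grad(x)           # standard 1/L step size

    print("gradient descent:   ", x)
    print("closed-form optimum:", np.linalg.solve(A, b))
    # The two agree to many digits: "shown to work" here means a convergence proof.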
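
And for the third bullet, a textbook example of an approximation algorithm is the matching-based 2-approximation for minimum vertex cover: repeatedly take both endpoints of an uncovered edge. Any optimal cover must contain at least one endpoint of each edge picked, so the returned cover is at most twice the optimum; that factor is a proof, not an empirical observation. The example graph below is made up.

    def vertex_cover_2approx(edges):
        """Greedy matching-based 2-approximation for minimum vertex cover."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.add(u)   # the picked edges form a matching, and any optimal
                cover.add(v)   # cover hits each of them, so |cover| <= 2 * OPT
        return cover

    edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
    print(vertex_cover_2approx(edges))   # {1, 2, 3, 4}; an optimal cover is {1, 4}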

So... to answer your question: no, I do not think that metaheuristics are approximation algorithms. This doesn't seem to me like a matter of opinion; it is just a fact about what the terms mean.