Solved – Are there algorithms and tools that can optimize black box functions with black box constraints

algorithms, optimization

Suppose that we have an objective function $f$ with parameters $x_1, x_2$.

$f$ is the objective function to be maximized for a given problem. Let's say:

$$f(x_1,x_2)=x_1+x_2+E(x_1,x_2)$$

where $E$ is a neural network model. Thus $E$ is not expressed as an explicit mathematical relation.

Let's also assume that the problem has one constraint:

$$G(x_1, x_2) < C$$

where $C$ is a constant and $G$, like $E$, is modelled using a neural network.

My question is the following:

  1. Is it possible to optimize this problem assuming that we do not know how $f$ and $G$ compute their results?
  2. Are there any tools that can deal with this sort of problem?
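
For concreteness, here is a minimal Python sketch of this setup; the bodies of `E` and `G` below are hypothetical stand-ins for the trained networks, which in practice would wrap a framework's prediction call:

```python
import numpy as np

# Hypothetical stand-ins for the trained neural networks; in practice
# E and G would wrap calls to a framework's predict() method.
def E(x1, x2):
    return np.tanh(0.5 * x1 - 0.3 * x2)

def G(x1, x2):
    return x1**2 + x2**2

C = 10.0  # the constant bound on G (arbitrary value for illustration)

def f(x1, x2):
    # Objective to maximize: f(x1, x2) = x1 + x2 + E(x1, x2)
    return x1 + x2 + E(x1, x2)

def feasible(x1, x2):
    # The single constraint: G(x1, x2) < C
    return G(x1, x2) < C
```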

Best Answer

For black-box optimization, most state of the art approaches currently use some form of surrogate modeling, also known as model-based optimization. This is where the objective function is locally approximated via some parametric model (e.g. linear/quadratic response surface or Gaussian process regression). This approach is "derivative free optimization" (DFO) in the sense that only the parametric model derivatives are used, vs. finite differences of the black-box function. Note that the local models are typically fitted in a "large" neighborhood, compared to finite differences.
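
To illustrate the surrogate idea (this is a bare-bones sketch, not any particular solver's implementation), the following fits a quadratic response surface to samples of a black-box objective and steps to the surrogate's best candidate; the toy objective and all parameter choices are assumptions for demonstration:

```python
import numpy as np

def quadratic_features(X):
    # Design matrix for a full quadratic model in two variables:
    # [1, x1, x2, x1^2, x1*x2, x2^2]
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2])

def surrogate_step(f, center, radius, n_samples=20, rng=np.random.default_rng(0)):
    # Sample the black box in a "large" neighborhood of the current iterate,
    # fit a quadratic response surface by least squares, then move to the
    # best point found on the surrogate among random candidates.
    X = center + rng.uniform(-radius, radius, size=(n_samples, 2))
    y = np.array([f(x) for x in X])
    coef, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

    candidates = center + rng.uniform(-radius, radius, size=(500, 2))
    return candidates[np.argmax(quadratic_features(candidates) @ coef)]

# Toy usage with a hypothetical black-box objective (maximization):
f = lambda x: x[0] + x[1] - 0.1 * (x[0]**2 + x[1]**2)
x = np.zeros(2)
for _ in range(10):
    x = surrogate_step(f, x, radius=1.0)
print(x, f(x))
```

Real solvers add trust-region management (shrinking or growing the sampling radius based on how well the model predicts actual improvement), but the fit-then-step loop above is the core of the approach.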

A good comparison of the current state of the art in unconstrained (and box-constrained) DFO methods is the recent paper

Rios, L. M., & Sahinidis, N. V. (2013). Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3), 1247–1293.

This study benchmarks various DFO methods for global and local optimization. (See my answer here for further discussion, including limits on problem size.)

For constrained optimization there is less work, but several generalizations exist for the polynomial response-surface DFO methods, at least. These methods tend to be formulated as sequential linear programming or sequential quadratic programming (SQP) algorithms$\mathbf{^*}$, depending on the degree of the polynomial approximation. Note that cubic or higher order polynomials are generally not used, due to ill conditioning.

I have not used these programs, but some possible recommendations might be:

  • Linear Case: The COBYLA algorithm by M. J. D. Powell (see the SciPy sketch after this list).
  • Quadratic Case: CONDOR (based on an unconstrained SQP algorithm by Powell).
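
COBYLA is exposed through SciPy's `scipy.optimize.minimize`, which makes it easy to try on the toy problem above. A minimal sketch, with hypothetical stand-ins for $E$ and $G$ and an arbitrary bound $C$ (COBYLA minimizes, so the objective is negated, and $G(x) < C$ is rewritten in COBYLA's $c(x) \ge 0$ form as $C - G(x) \ge 0$):

```python
import numpy as np
from scipy.optimize import minimize

C = 10.0  # hypothetical constraint bound

def E(x):   # stand-in for the neural-network term; any black-box callable works
    return np.tanh(0.5 * x[0] - 0.3 * x[1])

def G(x):   # stand-in for the black-box constraint function
    return x[0]**2 + x[1]**2

def neg_f(x):
    # COBYLA minimizes, so negate the objective f = x1 + x2 + E(x1, x2)
    return -(x[0] + x[1] + E(x))

# SciPy's COBYLA accepts inequality constraints of the form c(x) >= 0,
# so G(x) < C becomes C - G(x) >= 0.
cons = [{'type': 'ineq', 'fun': lambda x: C - G(x)}]

res = minimize(neg_f, x0=np.zeros(2), method='COBYLA',
               constraints=cons, options={'rhobeg': 0.5, 'maxiter': 500})
print(res.x, -res.fun)  # estimated maximizer and achieved objective value
```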

I have used Powell's box-constrained SQP solver (BOBYQA) with good results, and his software/algorithms are very reliable in my experience, as they were honed by years of practical industrial applications. Hence my recommendation. (His quadratic DFO variants also rank very well in the benchmark study cited above.)


$\mathbf{^*}$Penalty methods are a viable alternative for many constrained optimization problems, as suggested in the answer by Ami Tavory (since deleted). If you go this route, I would not recommend using Nelder-Mead. Many better alternatives exist, as outlined in the Rios & Sahinidis study.
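
For completeness, a penalty-method sketch along those lines, using SciPy's derivative-free Powell method as the inner solver (per the advice above to avoid Nelder-Mead); the toy $f$, $G$, bound $C$, and penalty weight are all assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

C = 10.0  # hypothetical constraint bound

f = lambda x: x[0] + x[1]          # toy black-box objective (to be maximized)
G = lambda x: x[0]**2 + x[1]**2    # toy black-box constraint function

def penalized(x, mu=100.0):
    # Turn the constrained maximization into an unconstrained minimization:
    # minimize -f(x) plus a charge of mu per unit of constraint violation.
    return -f(x) + mu * max(0.0, G(x) - C)

# Powell's derivative-free method as the inner (unconstrained) solver.
res = minimize(penalized, x0=np.zeros(2), method='Powell')
print(res.x, f(res.x), G(res.x))
```

In practice the penalty weight is often increased over a sequence of solves, since too small a weight permits constraint violations while too large a weight makes the inner problem badly conditioned.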
