If your problem is a constrained multiobjective optimization problem, and the objectives and/or constraints are nonlinear or nonconvex, then an appropriate method of choice is an evolutionary multiobjective optimization (EMO) method.
In terms of software,
- MATLAB's Global Optimization Toolbox has a multiobjective evolutionary solver that can handle linear constraints.
- R has an excellent package called mco, a multiobjective optimization solver that handles both linear and nonlinear constraints. I have had excellent results using this package.
Both of the aforementioned packages implement Deb's very popular NSGA-II algorithm.
Please tell us if you succeed in using these for your problem and if you have any questions.
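For concreteness, here is a minimal Python sketch (a toy example of mine, not the internals of mco or the MATLAB solver) of the Pareto-dominance test and nondominated filtering at the core of NSGA-II-style algorithms:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy bi-objective values (minimize both coordinates).
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(objs)  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

NSGA-II repeats this sorting to rank an entire population into successive fronts, then uses crowding distance to keep the fronts spread out.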
For black-box optimization, most state-of-the-art approaches currently use some form of surrogate modeling, also known as model-based optimization, in which the objective function is locally approximated by a parametric model (e.g., a linear/quadratic response surface or Gaussian process regression). This approach is "derivative-free optimization" (DFO) in the sense that only the derivatives of the parametric model are used, rather than finite differences of the black-box function. Note that the local models are typically fitted over a "large" neighborhood compared to finite differences.
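As a toy illustration of the surrogate idea (my own example, not any particular solver's internals): sample the black-box function over a neighborhood, fit a quadratic response surface, and minimize the model analytically instead of the function itself:

```python
import numpy as np

# Black-box objective: pretend we can only evaluate it, not differentiate it.
def f(x):
    return (x - 1.3) ** 2 + 0.5

# Sample the function over a "large" neighborhood around the current iterate.
xs = np.linspace(0.0, 3.0, 7)
ys = f(xs)

# Fit a local quadratic response surface y ~ a*x^2 + b*x + c ...
a, b, c = np.polyfit(xs, ys, 2)

# ... and minimize the *model* analytically: x* = -b / (2a).
x_star = -b / (2 * a)
```

A real DFO method would wrap this in a trust-region loop, shrinking or moving the sampling neighborhood as the model's predictions are confirmed or contradicted by new function evaluations.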
A good comparison of the current state of the art in unconstrained (and box-constrained) DFO methods is the recent paper
Rios, L. M., & Sahinidis, N. V. (2013). Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization.
This study benchmarks various DFO methods for global and local optimization. (See my answer here for further discussion, including limits on problem size.)
For constrained optimization there is less work, but several generalizations exist, at least for the polynomial response-surface DFO methods. These methods tend to be formulated as sequential linear programming (SLP) or sequential quadratic programming (SQP) algorithms$\mathbf{^*}$, depending on the degree of the polynomial approximation. Note that cubic or higher-order polynomials are generally not used, due to ill-conditioning.
I have not used these programs, but some possible recommendations might be:
- Linear Case: The COBYLA algorithm by M.J.D. Powell.
- Quadratic Case: CONDOR (based on an unconstrained SQP algorithm by Powell)
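For the linear case, a COBYLA implementation is also available via SciPy. A minimal sketch on a toy constrained problem of my own:

```python
from scipy.optimize import minimize

# Toy problem: minimize x^2 + y^2 subject to x + y >= 1,
# using only function evaluations (no derivatives).
def objective(x):
    return x[0] ** 2 + x[1] ** 2

# SciPy expects inequality constraints in the form fun(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

res = minimize(objective, x0=[2.0, 0.0], method="COBYLA",
               constraints=constraints)
# The optimum lies on the constraint boundary, at (0.5, 0.5).
```

COBYLA builds linear models of both the objective and the constraints from function values at the simplex vertices, which is exactly the "linear case" referred to above.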
I have used Powell's box-constrained quadratic DFO solver (BOBYQA) with good results, and his software/algorithms are very reliable in my experience, as they were honed by years of practical industrial applications. Hence my recommendation. (His quadratic DFO variants also rank very well in the benchmark study cited above.)
$\mathbf{^*}$Penalty methods are a viable alternative for many constrained optimization problems, as suggested in the answer by Ami Tavory (since deleted). If you go this route, I would not recommend using Nelder-Mead. Many better alternatives exist, as outlined in the Rios & Sahinidis study.
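A minimal sketch of the penalty route in Python/SciPy (the toy problem and penalty weight are my own choices), using Powell's method rather than Nelder-Mead per the advice above:

```python
from scipy.optimize import minimize

# Constrained toy problem: minimize (x - 2)^2 subject to x <= 1.
# A quadratic penalty converts it into an unconstrained problem.
MU = 1e3  # penalty weight (hypothetical choice; often increased iteratively)

def penalized(x):
    violation = max(0.0, x[0] - 1.0)  # amount by which x <= 1 is violated
    return (x[0] - 2.0) ** 2 + MU * violation ** 2

res = minimize(penalized, x0=[0.0], method="Powell")
# The minimizer is pushed close to the constraint boundary x = 1.
```

With a finite penalty weight the solution sits slightly inside the infeasible region; practical penalty methods solve a sequence of such problems with increasing MU.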
Best Answer
The most recent issue of The R Journal contains "Differential Evolution with DEoptim", which illustrates how to use DEoptim for portfolio optimization.
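DEoptim is an R package; for readers working in Python, SciPy ships an implementation of the same differential evolution algorithm. A minimal sketch on a toy objective of my own (not the portfolio problem from the article):

```python
from scipy.optimize import differential_evolution

# Toy smooth objective, minimized over a box (global minimum at (0.2, 0.8)).
def objective(x):
    return (x[0] - 0.2) ** 2 + (x[1] - 0.8) ** 2

res = differential_evolution(objective, bounds=[(0, 1), (0, 1)], seed=1)
```

Differential evolution only requires function evaluations within bound constraints, which is what makes it a natural fit for black-box problems like portfolio objectives with nonsmooth risk measures.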