Optimization – Multilinear Optimization

multilinear-algebra, nonlinear-optimization, optimization

Are there any efficient algorithms for solving optimization problems with a multilinear objective and multilinear constraints? The multilinear functions are sums of bilinear, trilinear (and so on) terms:

\begin{align*}
\min\quad & f_1(x_1,x_2,x_3,\dots,x_n)\\
\text{s.t.}\quad
&f_2(x_1,x_2,x_3,\dots,x_n)\leq b\\
&f_3(x_1,x_2,x_3,\dots,x_n)\leq c\\
&f_i(x_1,x_2,x_3,\dots,x_n)=a_{i1}x_1x_2\cdots x_n+a_{i2}x_1x_2\cdots x_{n-1}+\dots+a_{i(n-1)}x_1+a_{in}
\end{align*}

Best Answer

An optimization problem whose objective function and constraints are multilinear in the decision variables (i.e. sums of products, $f(x_1,\ldots,x_n)=\sum_{I\subseteq\{1,\ldots,n\}} a_I \prod_{i\in I} x_i$, $a_I \in \mathbb{R}$) is difficult to solve, as such problems are non-convex in general and there is no known way to convexify them. Thus the general problem cannot (to my knowledge) be transformed into, say, a linear program. However, as demonstrated in the earlier post by user p.s., this is possible in some special cases.
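For concreteness, a function in this sum-of-products form is easy to evaluate. A minimal Python sketch (the dictionary encoding of the coefficients $a_I$, mapping index tuples to reals, is my own choice, not part of the question):

```python
from math import prod

def multilinear(coeffs, x):
    """Evaluate f(x) = sum over subsets I of a_I * prod_{i in I} x_i.

    coeffs maps index tuples to real coefficients; the empty tuple ()
    encodes the constant term (the empty product is 1).
    """
    return sum(a * prod(x[i] for i in I) for I, a in coeffs.items())

# Hypothetical instance: f(x1, x2) = 3*x1*x2 + 2*x1 - 1
coeffs = {(0, 1): 3.0, (0,): 2.0, (): -1.0}
print(multilinear(coeffs, [1.0, 2.0]))  # 3*2 + 2 - 1 = 7.0
```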

There exist algorithms for solving the general problem. One in particular is the Reformulation-Linearization Technique (RLT) by Hanif Sherali and Cihan Tuncbilek (1992), which solves the problem as a sequence of linear programs. The method is based on relaxing the product terms. For instance, if there is a product term $x_1 x_2$, it is replaced by a new variable $X_{1,2}$, which is then constrained using the explicit lower and upper bounds on the original variables.

If the original problem had bounds $0 \leq x_1\leq 1$ and $1 \leq x_2\leq 2$, then one can generate a constraint on the product variable $X_{1,2}$ by noting that $(x_1-0)(x_2-1)\geq0$, which in expanded form yields $x_1 x_2 -x_1\geq0$; substituting $x_1 x_2\rightarrow X_{1,2}$ gives $X_{1,2}-x_1\geq 0$. By pairing all the bounds of the variables in this way, one can derive further constraints on the product variable, which gives a tighter relaxation of the original problem.
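For a single bilinear term, the four pairings of the bounds give exactly the well-known McCormick inequalities. A small Python sketch that generates them from given bounds (the coefficient-tuple encoding is ad hoc, for illustration only):

```python
def mccormick_constraints(l1, u1, l2, u2):
    """Bound-factor constraints linking X ~ x1*x2 to x1 and x2.

    Each constraint is a tuple (c1, c2, cX, rhs), meaning
    c1*x1 + c2*x2 + cX*X >= rhs; each comes from expanding one of
    the four nonnegative products of bound differences.
    """
    return [
        (-l2, -l1,  1.0, -l1 * l2),  # (x1 - l1)(x2 - l2) >= 0
        (-u2, -u1,  1.0, -u1 * u2),  # (u1 - x1)(u2 - x2) >= 0
        ( u2,  l1, -1.0,  l1 * u2),  # (x1 - l1)(u2 - x2) >= 0
        ( l2,  u1, -1.0,  u1 * l2),  # (u1 - x1)(x2 - l2) >= 0
    ]

# With the bounds above, the first constraint is X - x1 >= 0:
print(mccormick_constraints(0, 1, 1, 2)[0])  # (-1, 0, 1.0, 0)
```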

The reformulated problem is linear with respect to the original variables $x_i$ and the product variables $X_{i,j}$ (and their higher-order analogues), and can be solved with a linear-programming algorithm. If the solution of the relaxation is not feasible for the original problem, it can be used to divide the search space in two by branching on a variable. Continuing the previous example: if at the optimum of the relaxation $x_1=0.5$, one can branch into two subproblems that are identical to the original except that the first has $0\leq x_1\leq0.5$ and the second $0.5\leq x_1\leq 1$. Clearly, the optimal solution of the original problem lies in one of these branches. Because the bounds of $x_1$ have changed, the constraints on the product variables change as well, and it can be shown that the product variables converge to the actual product terms as the iteration continues in this fashion.
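To make the branch-and-bound idea concrete, here is a self-contained Python sketch. Instead of the RLT linear programs it uses crude interval lower bounds on the product terms (a deliberate simplification so that no LP solver is needed), but the prune/branch logic is the same: discard a box if its relaxation bound cannot beat the incumbent, otherwise bisect its widest variable.

```python
from math import prod

def interval_prod(intervals):
    """Interval enclosure of prod x_i given [lo, hi] bounds per variable."""
    lo, hi = 1.0, 1.0
    for a, b in intervals:
        cands = (lo * a, lo * b, hi * a, hi * b)
        lo, hi = min(cands), max(cands)
    return lo, hi

def lower_bound(coeffs, box):
    """Interval lower bound on f(x) = sum_I a_I prod_{i in I} x_i over box."""
    lb = 0.0
    for I, a in coeffs.items():
        plo, phi = interval_prod([box[i] for i in I])
        lb += a * plo if a >= 0 else a * phi
    return lb

def branch_and_bound(coeffs, box, tol=1e-3):
    """Minimize a multilinear f over a box, bisecting the widest variable
    of any box whose relaxation bound could still beat the incumbent."""
    def f(x):
        return sum(a * prod(x[i] for i in I) for I, a in coeffs.items())
    best_x = [(lo + hi) / 2 for lo, hi in box]
    best = f(best_x)
    stack = [list(box)]
    while stack:
        b = stack.pop()
        if lower_bound(coeffs, b) > best - tol:
            continue  # prune: this box cannot improve the incumbent
        mid = [(lo + hi) / 2 for lo, hi in b]
        if f(mid) < best:
            best, best_x = f(mid), mid
        j = max(range(len(b)), key=lambda i: b[i][1] - b[i][0])
        lo, hi = b[j]
        if hi - lo > tol:  # split the widest variable at its midpoint
            stack.append(b[:j] + [(lo, mid[j])] + b[j + 1:])
            stack.append(b[:j] + [(mid[j], hi)] + b[j + 1:])
    return best, best_x

# Example from the text: min x1*x2 - x1 over 0 <= x1 <= 1, 1 <= x2 <= 2;
# the optimum is 0, on the edges x1 = 0 and x2 = 1.
best, x = branch_and_bound({(0, 1): 1.0, (0,): -1.0}, [(0.0, 1.0), (1.0, 2.0)])
print(best)
```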

Although conceptually simple, the RLT method is not entirely straightforward to implement. For small problem instances it may therefore be worthwhile to try a commercial non-linear solver (e.g. the global solver BARON) or even the Maximize command in Mathematica, which solves the problem symbolically.
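As a further off-the-shelf option, a generic local NLP method can at least find stationary points of small instances. A sketch with SciPy's minimize (the problem instance is made up for illustration; note that, unlike BARON or RLT, a local method gives no global-optimality guarantee on a non-convex problem):

```python
from scipy.optimize import NonlinearConstraint, minimize

# Hypothetical instance: min x1*x2*x3  s.t.  x1*x2 + x3 <= 2,  0 <= xi <= 1
obj = lambda x: x[0] * x[1] * x[2]
con = NonlinearConstraint(lambda x: x[0] * x[1] + x[2], -float("inf"), 2.0)
res = minimize(obj, x0=[0.5, 0.5, 0.5], bounds=[(0, 1)] * 3, constraints=[con])
print(res.x, res.fun)  # res.fun should be near 0, at a corner of the box
```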

References

Sherali, Hanif D., and Cihan H. Tuncbilek. "A global optimization algorithm for polynomial programming problems using a reformulation-linearization technique." Journal of Global Optimization 2.1 (1992): 101-112.
