Minimization of a sum of linear fractional functions

convex optimization

I have the following sum-of-linear-fractional-functions optimization problem:
$$
\begin{array}{ll}
\min\limits_{\mathbf{x}} & \sum\limits_{i = 1}^{p} \dfrac{1}{b_i - \mathbf{a}_i^T \mathbf{x}} \\
\text{subject to} & \mathbf{x} \in X
\end{array}
$$

where $b_i - \mathbf{a}_i^T \mathbf{x} > 0$ for all $i$ and $X$ is a convex set. How can I solve this optimization problem?

Best Answer

Again, the answer to "how to solve the problem" depends in large part on what $X$ is, what software you intend to use, and so on. But as I said in the comment, at least the objective function can be expressed in second-order cone or semidefinite form. To see how, introduce new variables $y \in \mathbb{R}^p$ and write the problem as follows:
$$\begin{array}{ll} \text{minimize} & \sum_{i=1}^p y_i \\ \text{subject to} & a_i^T x < b_i, ~ \dfrac{1}{b_i - a_i^T x} \leq y_i, \quad i = 1, 2, \dots, p \\ & x \in X \end{array}$$
Now let's use this transformation:
$$a_i^T x < b_i, ~ \frac{1}{b_i - a_i^T x} \leq y_i \quad\Longleftrightarrow\quad \begin{bmatrix} y_i & 1 \\ 1 & b_i - a_i^T x \end{bmatrix} \succeq 0$$
Go ahead and verify this for yourself using the basic definition of positive semidefiniteness. So your problem becomes
$$\begin{array}{ll} \text{minimize} & \sum_{i=1}^p y_i \\ \text{subject to} & \begin{bmatrix} y_i & 1 \\ 1 & b_i - a_i^T x \end{bmatrix} \succeq 0, \quad i = 1, 2, \dots, p \\ & x \in X \end{array}$$
So it is ready to be converted to a semidefinite program, provided $X$ is semidefinite representable. And $2\times 2$ linear matrix inequalities like these can actually be represented using second-order cone constraints, since
$$\begin{aligned} \begin{bmatrix} a & b \\ b & c \end{bmatrix} \succeq 0 &\quad\Longleftrightarrow\quad a, c \geq 0, \quad ac - b^2 \geq 0 \\ &\quad\Longleftrightarrow\quad a, c \geq 0, \quad \sqrt{b^2 + \left(\tfrac{1}{2}(a - c)\right)^2} \leq \tfrac{1}{2}(a + c). \end{aligned}$$
Go ahead, multiply that out!
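
In case it helps, here is one way to carry out both checks; it is just the determinant test for $2\times 2$ positive semidefinite matrices spelled out. A symmetric $2\times 2$ matrix is positive semidefinite if and only if its diagonal entries and its determinant are nonnegative, so
$$\begin{bmatrix} y_i & 1 \\ 1 & b_i - a_i^T x \end{bmatrix} \succeq 0 \quad\Longleftrightarrow\quad y_i \geq 0, \quad b_i - a_i^T x \geq 0, \quad y_i\,(b_i - a_i^T x) \geq 1,$$
and the last condition forces $b_i - a_i^T x > 0$ and is exactly $\frac{1}{b_i - a_i^T x} \leq y_i$. For the second-order cone form, squaring both sides of the norm inequality (both sides are nonnegative once $a, c \geq 0$) gives $b^2 + \tfrac{1}{4}(a - c)^2 \leq \tfrac{1}{4}(a + c)^2$, which simplifies to $ac - b^2 \geq 0$.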

Or don't. Use a modeling framework like CVX or YALMIP (disclaimer: the former is mine). If you do, it handles these things for you. In fact, you don't even need to rewrite the objective; in CVX, you can just write it as sum(inv_pos(b-A*x)). But again, this assumes that you can represent $X$ in a manner consistent with these frameworks.
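
For concreteness, here is a minimal CVX sketch along those lines. It assumes A is the $p \times n$ matrix whose rows are the $a_i^T$, b is the $p$-vector of the $b_i$, and n is the dimension of $x$; the box constraint is only a placeholder for $X$, so substitute whatever constraints actually describe your set.

    % Assumptions: A is p-by-n with rows a_i', b is p-by-1, n is the dimension of x.
    cvx_begin
        variable x(n)
        minimize( sum( inv_pos( b - A*x ) ) )
        subject to
            abs(x) <= 1;   % placeholder for x in X -- replace with your own constraints
    cvx_end

Note that the inv_pos atom encodes the domain restriction $b_i - a_i^T x > 0$ internally, so that constraint does not need to be stated separately.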
