Minimize the sum of $r$-largest entries with constraints

convex-optimization · linear-programming · optimization

I have been stuck on this optimization problem for a long time. I want to minimize the sum of the $r$ largest entries of a vector:
$$
\min_x \sum_{l=1}^r (Ax)_{[l]} \quad \text{s.t.}\quad Bx\le b
$$

where $x\in \mathbb{R}^n$ is the decision variable and $(\cdot)_{[l]}$ denotes the $l$-th largest component of its argument.

The objective is convex and so are the constraints (see Example 3.14 on page 87 of *Convex Optimization* by Prof. Boyd). Furthermore, the problem can be transformed into the following linear program:
$$
\begin{aligned}
\min_{x,z}\quad & z\\
\text{s.t.}\quad & Bx\le b\\
& QAx\le z\mathbf{1}
\end{aligned}
$$

where $Q\in\mathbb{R}^{\binom{n}{r}\times n}$ is the selection matrix each of whose rows has $r$ entries equal to $1$ and $(n-r)$ entries equal to $0$. The rows of $Q$ enumerate all $\binom{n}{r}$ ways of summing $r$ of $Ax$'s entries.

This transformation looks appealing, and the transformed problem can in principle be solved by an interior-point method. However, the $Q$ block alone contributes $\binom{n}{r}$ inequality constraints, which grows combinatorially in $n$ and makes this formulation impractical to solve directly.
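To make the blow-up concrete, here is one way such a $Q$ could be constructed (`build_Q` is an illustrative helper, not from the question itself): even for modest $n$ and $r$ the number of rows, and hence constraints, explodes.

```python
import itertools
import math

import numpy as np

def build_Q(n, r):
    """Build the C(n, r) x n selection matrix whose rows enumerate
    every way of picking r entries out of n (1 = selected, 0 = not)."""
    rows = []
    for idx in itertools.combinations(range(n), r):
        row = np.zeros(n)
        row[list(idx)] = 1.0
        rows.append(row)
    return np.vstack(rows)

Q = build_Q(6, 3)
print(Q.shape)            # (20, 6), since C(6, 3) = 20
print(math.comb(30, 15))  # 155117520 constraints already at n = 30, r = 15
```

So the enumerative LP is exponential in size long before $n$ reaches interesting dimensions, which is exactly the difficulty described above.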

My question is: how can we solve this problem more efficiently? I searched online and found that the subgradient method seems to apply, but it may converge rather slowly.
Any suggestions or pointers are welcome. Thanks in advance.

Best Answer

You appear to be using a naive enumerative representation of the sum of the $k$ largest entries. You can model it much more efficiently.

The sum of the $k$ largest entries of $Ax\in \mathbb{R}^n$ can be bounded above by a scalar $t$, in a convexity-preserving setting, by introducing a scalar $s$ and a vector $z\in \mathbb{R}^n$ with

$$ t\geq ks + \sum_{i=1}^n z_i,\qquad z\geq 0,\qquad z + s\mathbf{1} \geq Ax $$

I do not have a reference off the top of my head, although I suspect you can find it in the book you are referring to.
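As a sketch of how this compact model could be set up in practice (`min_sum_k_largest` is a hypothetical helper built on `scipy.optimize.linprog`; the toy data $A$, $B$, $b$ below are my own assumptions for testing):

```python
import numpy as np
from scipy.optimize import linprog

def min_sum_k_largest(A, B, b, k):
    """Minimize the sum of the k largest entries of A @ x subject to
    B @ x <= b, via the compact formulation
        minimize k*s + sum(z)  s.t.  z >= 0,  z + s*1 >= A @ x.
    Decision vector is [x (n entries), s (scalar), z (m entries)]."""
    m, n = A.shape
    p = B.shape[0]
    # Objective: k*s + sum(z); x itself carries no cost.
    c = np.concatenate([np.zeros(n), [k], np.ones(m)])
    # Inequalities in A_ub @ v <= b_ub form:
    #   B x <= b             -> [B,  0,  0]
    #   A x - s*1 - z <= 0   -> [A, -1, -I]
    A_ub = np.block([
        [B, np.zeros((p, 1)), np.zeros((p, m))],
        [A, -np.ones((m, 1)), -np.eye(m)],
    ])
    b_ub = np.concatenate([b, np.zeros(m)])
    # x and s are free; z is nonnegative.
    bounds = [(None, None)] * (n + 1) + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.fun

# Toy check: A = I and the constraint x >= 1 (written as -x <= -1), k = 2.
# The sum of the 2 largest entries of x is then minimized at x = (1, 1, 1).
A = np.eye(3)
B = -np.eye(3)
b = -np.ones(3)
x_opt, val = min_sum_k_largest(A, B, b, k=2)
print(np.round(val, 6))  # 2.0
```

Note the variable count here is $n + 1 + m$ and the constraint count is $p + m$, versus the $\binom{n}{r}$ constraints of the enumerative formulation, which is the whole point of the reformulation.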
