[Math] solving multiple linear programming problems with the same set of constraints

linear-programming, oc.optimization-and-control, reference-request

Hi,

I need to solve a set of linear programs of the form:

Problem $i$: $\quad \max c_i \cdot x$ s.t. $ A x \leq b$.

The $c_i$'s are different vectors so each problem has a different objective function.
I currently solve them one by one using the simplex algorithm.
(In my particular problem there are a few thousand such problems, each with
a few hundred variables and constraints.)
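The one-by-one baseline described above can be sketched as follows, assuming SciPy's `linprog` (with its default bounds $x \geq 0$); the matrix `A`, vector `b`, and the two objective vectors are toy placeholders standing in for the thousands of real objectives:

```python
import numpy as np
from scipy.optimize import linprog

# Shared constraints A x <= b (linprog also imposes x >= 0 by default).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 2.0])

# Two illustrative objective vectors c_i.
objectives = [np.array([3.0, 2.0]), np.array([1.0, 5.0])]

def solve_all(A, b, objectives):
    """Solve max c.x s.t. A x <= b for each c, one LP at a time."""
    results = []
    for c in objectives:
        # linprog minimizes, so negate c to maximize.
        res = linprog(-c, A_ub=A, b_ub=b, method="highs")
        results.append((res.x, -res.fun))
    return results

sols = solve_all(A, b, objectives)
```

Nothing here exploits the shared constraints yet; each call re-solves the feasible region from scratch.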

Is there any way to use the fact that all problems share the same constraints
in order to speed up their solution? (My guess is that the simplex algorithm
may not be a good candidate here, since it simply traverses
the solution space, and for different objective functions it will need to explore
different parts of that space.)
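One way to see what "sharing the constraints" could buy is that every optimum is attained at a vertex of the common polyhedron, so in principle one can enumerate the vertices once and reduce each subsequent problem to a maximum of dot products. This is only a sketch for tiny instances (vertex enumeration is exponential in general, so it is not practical at the scale above):

```python
import itertools
import numpy as np

def vertices(A, b, tol=1e-9):
    """Enumerate vertices of {x : A x <= b} by solving every square
    subsystem of constraints and keeping the feasible solutions.
    Exponential in general -- only sensible for toy instances."""
    m, n = A.shape
    verts = []
    for rows in itertools.combinations(range(m), n):
        Asub, bsub = A[list(rows)], b[list(rows)]
        try:
            x = np.linalg.solve(Asub, bsub)
        except np.linalg.LinAlgError:
            continue  # constraints not linearly independent
        if np.all(A @ x <= b + tol):
            verts.append(x)
    return verts

# Shared feasible region: the triangle x >= 0, y >= 0, x + y <= 1.
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
V = vertices(A, b)  # (0,0), (1,0), (0,1)

def maximize(c, V):
    """With the vertex list precomputed, each objective costs O(|V| n)."""
    best = max(V, key=lambda v: float(c @ v))
    return float(c @ best)
```

The precomputation is shared across all objectives, but since $|V|$ can be exponential in the dimension, this mostly illustrates the structure rather than a usable speed-up.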

I don't see any obvious relation between the different
$c_i$'s, so we can assume, for example, that they're drawn at random (each coordinate
an i.i.d. standard Gaussian).
I've considered looking at the dual: there you get the same objective function but
each problem has its own constraints. However, I don't have a concrete plan for exploiting this fact.
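For concreteness (assuming $x$ is unrestricted in sign, which the question doesn't specify), the dual of problem $i$ is:

$$\min \; b \cdot y \quad \text{s.t.} \quad A^{\mathsf{T}} y = c_i, \; y \geq 0,$$

so the objective $b \cdot y$ is shared across all dual problems, while the equality constraints $A^{\mathsf{T}} y = c_i$ change with $i$.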

Googling turned up some material on 'linear programs with multiple objective functions', but those approaches seem to seek a single solution that is 'reasonable' with respect to every objective. I want the exact (generally different) optimal solution for each objective, and I couldn't find a reference for that. Is the computational complexity simply linear in the number of problems, or can we do better?

Best Answer

Where do the different objective functions come from? Perhaps if we knew more about your problem we could make a helpful suggestion about how to approach it.

If the objective functions are totally unrelated to each other, then beyond providing a feasible solution to the LP, solving one of your LPs isn't very useful for solving any of the others.

The standard approach you should be aware of is "warm starting" the simplex method: use the optimal basis for problem $i$ as the starting point for solving problem $i+1$. Note that although warm starting the simplex method can be very helpful for small changes in the objective function, it can actually slow things down when the changes are large. Most modern solvers use a heuristic "crash basis" to start the simplex method, and these usually work very well.

For interior point methods for linear programming, there are no well-developed methods for warm starting with a new objective function.
