Change in the optimal value of a linear program after relaxing one of the constraints

convex-optimization, linear-programming, optimization

We have a linear program
\begin{align}
\max_{x} f:= c^T x ~~~\text{subject to}~~~ h_i(x)= 0, ~i=1,\ldots,m ~~~\text{and}~~~ x\geq \epsilon_1
\end{align}

with optimal solution $x_1^*\in\mathbb{R}^n$, where $c> 0$ and $\epsilon_1 > 0$. One of the constraints is that $1^T x = 1$, i.e., the entries of $x$ sum up to 1.

Suppose we relax the last constraint to $x\geq \epsilon_2$, where $0< \epsilon_2 < \epsilon_1$. Clearly, $f(x_2^*)\geq f(x_1^*)$, where $x_2^*$ is the optimal solution after relaxing the last constraint. However, can we obtain an upper bound on the difference $f(x_2^*) - f(x_1^*)$ in terms of $c, \epsilon_1, \epsilon_2$? I was thinking one could perhaps use a water-filling type argument to obtain a bound as a function of $(\epsilon_1-\epsilon_2)(c_{\max} - c_{\min})$, where $c_{\max}$ and $c_{\min}$ are the largest and smallest coefficients of $c$, but I am not sure whether this is correct. Could someone advise?

Best Answer

There's no bound of this type.

Say that $n=3$ and the constraints (in addition to the constraint $x_1 + x_2 + x_3 = 1$ that you specified) limit us to points that lie on the line through $A := (1-2\epsilon_1, \epsilon_1, \epsilon_1)$ and $B := (\epsilon_2, \epsilon_2, 1 - 2 \epsilon_2)$. Take $c_1 = c_2 = 1$ and $c_3 = 1+M$, so that the objective value is $c_1 x_1 + c_2 x_2 + c_3 x_3 = 1 + Mx_3$.

With the constraint $x_1, x_2, x_3 \ge \epsilon_1$, the optimal point will be $A$, with objective value $1 + \epsilon_1 M$.

With the constraint $x_1, x_2, x_3 \ge \epsilon_2$, the optimal point will be $B$, with objective value $1 + (1-2\epsilon_2)M$.

Let's quickly see why this is. We can parametrize the points on this line as $(1-t)A + tB$. The objective value changes linearly with $t$, and it is bigger at $B$ than at $A$, so it is increasing with $t$. Meanwhile, in this parametrization, we have $x_2 = (1-t)\epsilon_1 + t \epsilon_2 = \epsilon_1 - t(\epsilon_1 - \epsilon_2)$. In the first case, the constraint $x_2 \ge \epsilon_1$ translates to $t \le 0$, so we take $t=0$ and get point $A$. In the second case, the constraint $x_2 \ge \epsilon_2$ translates to $t\le 1$, so we take $t=1$ and get point $B$.

Anyway, going from point $A$ to point $B$, the change in optimal value is $(1 - \epsilon_1 - 2\epsilon_2)M$, which for small $\epsilon_1, \epsilon_2$ is nearly all of $M = c_{\max} - c_{\min}$, and certainly not bounded by any function of $(\epsilon_1 - \epsilon_2)(c_{\max} - c_{\min})$.
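As a sanity check, this counterexample can be solved numerically. The sketch below uses `scipy.optimize.linprog`, with illustrative values $\epsilon_1 = 0.1$, $\epsilon_2 = 0.01$, $M = 100$ (my choices, not from the answer); the second equality constraint is a plane containing both $A$ and $B$, so together with $1^{\mathsf T}x = 1$ it cuts out the line through them.

```python
# Numerical sanity check of the counterexample (a sketch; eps1, eps2, M
# are illustrative choices, not values from the original post).
import numpy as np
from scipy.optimize import linprog

eps1, eps2, M = 0.1, 0.01, 100.0
A_pt = np.array([1 - 2*eps1, eps1, eps1])   # point A
B_pt = np.array([eps2, eps2, 1 - 2*eps2])   # point B
c = np.array([1.0, 1.0, 1.0 + M])           # c_max - c_min = M

# The feasible line is the intersection of the plane 1^T x = 1 with a second
# plane through A and B; its normal is orthogonal to B - A and to (1,1,1).
n = np.cross(B_pt - A_pt, np.ones(3))
A_eq = np.vstack([np.ones(3), n])
b_eq = np.array([1.0, n @ A_pt])

def opt_value(eps):
    # linprog minimizes, so negate c to maximize c^T x subject to x >= eps
    res = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(eps, None)] * 3)
    return -res.fun

f1 = opt_value(eps1)   # optimum at A: 1 + eps1 * M
f2 = opt_value(eps2)   # optimum at B: 1 + (1 - 2*eps2) * M
print(f1, f2, f2 - f1)
```

With these values the optima are $1 + \epsilon_1 M = 11$ at $A$ and $1 + (1-2\epsilon_2)M = 99$ at $B$: a gap of $88$, while the proposed bound $(\epsilon_1-\epsilon_2)(c_{\max}-c_{\min})$ would only be $9$.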

We can prove an upper bound of $c_{\max} - c_{\min}$ on the change in the optimal value, just because the objective value $c^{\mathsf T}x$ always lies in the range $[c_{\min}, c_{\max}]$, but that's not very interesting.
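Written out, for any feasible $x$ (so $x \ge 0$ and $1^{\mathsf T}x = 1$):
\begin{align}
c_{\min} = c_{\min}\, 1^{\mathsf T}x \;\le\; c^{\mathsf T}x \;\le\; c_{\max}\, 1^{\mathsf T}x = c_{\max}.
\end{align}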


On the other hand, keep in mind that the definition of this LP depends on $\epsilon_1$ and $\epsilon_2$. To make this explicit, let $P$ be some LP with constraints of this type, and let $F_P(\epsilon)$ denote the optimal value of $P$ when $x \ge \epsilon$ is added as a constraint.

We get a feasible program for $\epsilon \in [0,a]$ for some $a$, and on that range $F_P(\epsilon) \in [c_{\min}, c_{\max}]$, so the "average" change $F_P(\epsilon) - F_P(\epsilon+\delta)$ over "many" values of $\epsilon$ and $\delta$ is at most $\delta(c_{\max}-c_{\min})$, which is the kind of bound you expected.

More precisely, $F_P$ is a piecewise linear function on $[0,a]$, so it has finitely many slopes, and in particular a maximum slope. Therefore:

  • Given $P$, we can find a constant $C$ such that $F_P(\epsilon_2) - F_P(\epsilon_1) \le C(\epsilon_1 - \epsilon_2)$ for all $\epsilon_1, \epsilon_2$; the catch is that $C$ need not be a function of $c_{\max} - c_{\min}$ alone.
  • The average slope is $c_{\max}-c_{\min}$, so there is some interval $I$ (also depending on $P$) such that we can take $C = c_{\max}-c_{\min}$ for $\epsilon_1, \epsilon_2 \in I$.

However, if you give me $\epsilon_1 > \epsilon_2$ before I say what $P$ is, then I can cook up a $P$ (for example, the one I cooked up above) such that $F_P(\epsilon_2) - F_P(\epsilon_1)$ is nearly all of $c_{\max} - c_{\min}$. For this $P$, $F_P$ has a very sharp change in the interval $[\epsilon_2, \epsilon_1]$ and is nearly constant otherwise.
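To see that sharp change concretely, one can sample $F_P$ on a grid for a $P$ of the kind constructed above. This is again only a sketch with `scipy.optimize.linprog`, using the illustrative values $\epsilon_1 = 0.1$, $\epsilon_2 = 0.01$, $M = 100$ (my choices):

```python
# Sample F_P(eps) for the cooked-up P: the line through A and B,
# with eps1 = 0.1, eps2 = 0.01, M = 100 as illustrative values.
import numpy as np
from scipy.optimize import linprog

eps1, eps2, M = 0.1, 0.01, 100.0
A_pt = np.array([1 - 2*eps1, eps1, eps1])
B_pt = np.array([eps2, eps2, 1 - 2*eps2])
c = np.array([1.0, 1.0, 1.0 + M])
n = np.cross(B_pt - A_pt, np.ones(3))       # second plane through A and B
A_eq = np.vstack([np.ones(3), n])
b_eq = np.array([1.0, n @ A_pt])

def F(eps):
    res = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(eps, None)] * 3)
    return -res.fun if res.success else None  # None once the LP is infeasible

for eps in [0.0, eps2, 0.03, 0.05, 0.07, eps1]:
    print(eps, F(eps))
# F_P drops from 99 at eps2 to 11 at eps1, so almost the entire swing of
# c_max - c_min happens inside [eps2, eps1], while F_P changes only
# slightly on [0, eps2].
```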