Is the Lagrangian relaxation a convex optimization problem or not?

convex-optimization, optimization, relaxations

We know that a standard maximization LP takes the form
$$z^* = \max_x\, c^Tx \quad \text{s.t.}\quad x \in X,\ Ax \leq b,$$
where $b \in \mathbb{R}^m$. There is a technique called Lagrangian relaxation, which can make the problem easier to solve. The Lagrangian relaxation is given by
$$z(u) = \max_x\, c^Tx + u^T(b-Ax) \quad \text{s.t.}\quad x \in X.$$
The Lagrangian relaxation $z(u)$ can provide an upper bound on $z^*$ for any $u \geq 0, u \in \mathbb{R}^m$. Therefore, the tightest possible Lagrangian relaxation is
$$\min_{u \geq 0}z(u)$$
This gives the best upper bound on the original optimal value obtainable this way. But the textbook also states that the above minimization problem (in $u$) is a convex optimization problem. Could someone explain why $z(u)$ is a convex function of $u$? Thank you very much.
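To make the upper-bound claim concrete, here is a minimal numeric sketch on a toy instance of my own (not from the question): maximize $3x_1 + 2x_2$ over $x \in \{0,1\}^2$ subject to $x_1 + x_2 \leq 1$, so $c = (3,2)$, $A = (1,1)$, $b = 1$:

```python
# Toy instance (my own example, not from the question):
# maximize 3*x1 + 2*x2  s.t.  x in {0,1}^2,  x1 + x2 <= 1.
from itertools import product

c = (3, 2)
b = 1  # single constraint: x1 + x2 <= 1

X = list(product((0, 1), repeat=2))  # the "easy" set X = {0,1}^2

def z_star():
    """Optimal value of the original problem (brute force over X)."""
    return max(c[0]*x1 + c[1]*x2 for x1, x2 in X if x1 + x2 <= b)

def z(u):
    """Lagrangian relaxation: max over x in X of c^T x + u*(b - Ax)."""
    return max(c[0]*x1 + c[1]*x2 + u*(b - x1 - x2) for x1, x2 in X)

# Weak duality: z(u) >= z* for every u >= 0.
assert all(z(u) >= z_star() for u in [0, 0.5, 1, 2, 5, 10])

# The tightest bound min_{u>=0} z(u), approximated on a grid,
# is attained around u = 2 in this instance and equals z* = 3.
print(z_star(), min(z(u/10) for u in range(0, 101)))
```

Here the relaxed maximization over $X$ is trivial to solve for each fixed $u$, which is the point of the technique; the grid search over $u$ stands in for minimizing $z(u)$.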

Best Answer

The pointwise maximum of convex functions is convex. For example, $f(u) = \max\{2u,\, 1+u,\, u^2\}$ is convex. For each fixed $x \in X$, the map $u \mapsto c^Tx + u^T(b-Ax)$ is affine (hence convex) in $u$, so $z$ is the pointwise maximum of affine functions (one for each $x$), and is therefore convex.
