[Math] Convex optimization: affine equality constraints into inequality constraints

constraint-programming, convex-optimization, convex-analysis, optimization

I have the following problem:

\begin{equation}
\begin{array}{cll}
\displaystyle \min_{ \mathbf{x} } & & \displaystyle f(\mathbf{x}) \\
\mathrm{s.t.} & & \mathbf{x} \in \mathcal{C} \\
& & \mathbf{A}\mathbf{x}-\mathbf{b}=0.
\end{array}
\end{equation}

with $\mathcal{C}$ closed and convex, $f(\mathbf{x})$ monotonically increasing and convex.

Can I replace the affine equality constraint with the inequality $\mathbf{A}\mathbf{x}-\mathbf{b}\geq 0$? If not in general, is there a case in which this holds?

How can I prove this? References would also be useful.

Thanks

Best Answer

You don't necessarily get the same solution.

Consider $\mathcal{C}$ to be the unit ball in $\mathbb{R}^2$, and consider the corresponding maximization problem where $f$ is concave and we maximize instead (the idea is the same for concave maximization and convex minimization; I just happened to write this solution in terms of concave maximization). Take $\mathbf{A} = [1, 1]$, $\mathbf{b} = 0$, and $f(\mathbf{x}) = [1, 1]\,\mathbf{x}$.

Then the optimal value of the objective is zero, and any optimizer $\mathbf{x}$ must be orthogonal to $[1, 1]^T$. If we change the equality to the inequality $\geq 0$, then every vector in the half-ball on the side of $[1,1]^T$ satisfies the constraints, and on that set the objective is non-zero (in fact, it is maximized at $\mathbf{x} = \frac{1}{\sqrt{2}} [1, 1]^T$, with value $\sqrt{2}$).
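Here is a small numerical sketch of that counterexample (the grid search over the unit circle and the tolerance `1e-4` are my own assumptions, purely for illustration): it compares the best value of $f(\mathbf{x}) = [1,1]\,\mathbf{x}$ over the unit ball under the equality constraint versus the relaxed inequality.

```python
import numpy as np

# Sample the boundary of the unit ball in R^2 (a linear objective
# attains its max over the ball on the boundary).
theta = np.linspace(0, 2 * np.pi, 100001)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
vals = pts @ np.array([1.0, 1.0])  # f(x) = [1,1]·x at each point

# Equality case: keep only points with [1,1]·x ≈ 0.
eq_best = vals[np.isclose(vals, 0.0, atol=1e-4)].max()

# Inequality case: keep points with [1,1]·x >= 0.
ineq_best = vals[vals >= 0].max()

print(eq_best)    # ≈ 0
print(ineq_best)  # ≈ sqrt(2)
```

The relaxed problem attains roughly $\sqrt{2}$ instead of $0$, confirming that the two constraint formulations are not equivalent here.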

Back to the convex minimization case:

Since the feasible set for the inequality case contains that of the equality case, the optimal value in the inequality case is at least as small as the optimal value in the equality case.
