[Math] Does the uniqueness of solutions to convex optimization with linear constraints hold in n>3 dimensions

geometry, nonlinear optimization, optimization, surfaces

This is a repost of an earlier question, where I think I was not clear enough in what I was asking:

I am examining the following optimization problem, and I would like to know whether a solution, when it exists, is unique.

Let $\mathcal{P}_{I}$ be a (known) partition of the set $I := \bigcup\limits_{i=1}^{N}[a_{i},b_{i}] \subset \mathbb{R}$. To each element $p \in \mathcal{P}_{I}$ we assign a quantity $y_{p} \in [0,1]$. Let $\lambda(\cdot)$ denote the Lebesgue measure.

The optimization problem is:

$\min \sum\limits_{p \in \mathcal{P}_{I}} \sqrt{\lambda^{2}(p)+y^{2}_{p}}$

s.t.

$\sum\limits_{p \in \mathcal{P}_{I}}g_{k}(p)L_{k}(y_{p}) = c_{k}, \quad k \in \{1,2,\dots,M\}$

Where:
$g_{k}(p)$ is a function of the fixed partition only, while $L_{k}(y_{p})$ is a linear function of the decision variable $y_{p}$.
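To make this concrete, here is a minimal numerical sketch of one instance (hypothetical data, not from the question: three cells, a single constraint with $g_{1} \equiv 1$, $L_{1}$ the identity, and $c_{1} = 1$). Projected gradient descent from several random starting points converges to the same minimizer, which is consistent with uniqueness:

```python
import math
import random

# Hypothetical cell lengths lambda(p) for a 3-cell partition (they sum to 1).
lam = [0.5, 0.3, 0.2]

def objective(y):
    # sum over cells of sqrt(lambda(p)^2 + y_p^2)
    return sum(math.sqrt(l * l + v * v) for l, v in zip(lam, y))

def solve(y0, steps=20000, lr=0.05):
    # Projected gradient descent for: min objective(y) s.t. sum(y) = 1.
    y = list(y0)
    n = len(y)
    for _ in range(steps):
        # gradient of each term is y_p / sqrt(lambda(p)^2 + y_p^2)
        y = [v - lr * v / math.sqrt(l * l + v * v) for l, v in zip(lam, y)]
        # re-project onto the affine constraint set {y : sum(y) = 1}
        shift = (sum(y) - 1.0) / n
        y = [v - shift for v in y]
    return y

random.seed(0)
starts = [[random.random() for _ in range(3)] for _ in range(3)]
sols = [solve(s) for s in starts]
# All three starts land on the same minimizer (here y_p = lam_p).
```

For this instance the stationarity condition forces $y_{p} \propto \lambda(p)$, so with $\sum_{p}\lambda(p) = 1$ the minimizer is exactly $y = \lambda$; every start recovers it.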

My thinking thus far:

The objective function is strictly convex in the variables $y_{p}$ (each term $\sqrt{\lambda^{2}(p)+y_{p}^{2}}$ is strictly convex whenever $\lambda(p) > 0$). Geometrically, each level set of the objective is the boundary of a convex sublevel set in a high-dimensional vector space (i.e., one dimension per $y_{p}$), restricted to the box $[0,1]^{n}$.
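For reference, each summand can be differentiated explicitly: writing $a := \lambda(p) > 0$,

$$\frac{d}{dy}\sqrt{a^{2}+y^{2}} = \frac{y}{\sqrt{a^{2}+y^{2}}}, \qquad \frac{d^{2}}{dy^{2}}\sqrt{a^{2}+y^{2}} = \frac{a^{2}}{\left(a^{2}+y^{2}\right)^{3/2}} > 0,$$

so each term is strictly convex in its own variable, and the objective, as a sum of strictly convex terms in separate variables, is strictly convex on $[0,1]^{n}$.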

However, the equality constraints are affine, so the feasible set is the intersection of an affine subspace with the box $[0,1]^{n}$, i.e., a convex polytope. Therefore, as we decrease the value of the objective function, it seems we eventually reach the point where the level set of the objective meets the feasible set only at a corner point of the polytope or at a point of tangency with the level surface.

Since the objective function is strictly convex, I think I can make the following conclusion:

If more than one feasible point attains a given value $K$ of the objective function, then there exists a value $L<K$ of the objective function that is also attained by some feasible point.

This follows from the fact that the intersection of the feasible polytope with a level set of the objective always has lower dimension than the level surface itself. Essentially, I am resting on the analogy of a cutting plane: you cannot cut a strictly convex surface with a plane so that the intersection contains more than one point while the entire surface lies on or below the plane.

Sorry for the long-winded post, but I wanted to lay out my thinking. I believe the above shows that if a minimum exists, it is unique. I also think this extends to infinite-dimensional vector spaces as the norm of the partition approaches zero.

Does this reasoning make sense? Does the intuition of convex optimization with linear constraints in $n \leq 3$ dimensions carry over to higher (possibly infinite) dimensions?

Thanks 🙂

Best Answer

OK. I've done a bit of reading and found that this is indeed true in both the finite- and infinite-dimensional settings. A reference from UCLA shows that the defining property of convex minimization problems is that any local minimum is also a global minimum. Here, the functional equality constraints are affine in $y$ and the objective function is convex, so this is a convex optimization problem in a vector space. Moreover, since the objective is strictly convex (each term $\sqrt{\lambda^{2}(p)+y_{p}^{2}}$ is strictly convex when $\lambda(p)>0$), the global minimizer, when it exists, is unique.
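The uniqueness step deserves one more line, since convexity alone only gives local = global. Suppose $y^{(1)} \neq y^{(2)}$ are both minimizers with optimal value $m$. The constraints are affine, so the midpoint $\bar{y} = \tfrac{1}{2}\left(y^{(1)}+y^{(2)}\right)$ is also feasible, and strict convexity of the objective $f$ gives

$$f(\bar{y}) < \tfrac{1}{2}f\!\left(y^{(1)}\right) + \tfrac{1}{2}f\!\left(y^{(2)}\right) = m,$$

contradicting the optimality of $m$. Hence there is at most one minimizer.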
