Equivalence of two Lagrangian Optimization Problems with the same constraints

lagrange-multiplier, nonlinear-optimization, optimization

Assume we are given two strictly convex functions $f_1$ and $f_2$ mapping $\mathbb{R}^p$ to $\mathbb{R}$. We want to find the vector $X\in \mathbb{R}^p$ that minimizes each of them.

Both functions have the same unconstrained minimizer, in the sense that solving $\nabla f_1(X) = 0$ and $\nabla f_2(X) = 0$ yields the same unique solution, even though $\nabla f_1 \neq \nabla f_2$ in general.

Suppose we additionally introduce a set of constraints $g(X)=0$ and form the two Lagrangians $$ \mathcal{L}_1(X,\lambda)=f_1(X) +\lambda^Tg(X) $$
$$ \mathcal{L}_2(X,\gamma)=f_2(X) +\gamma^Tg(X), $$
where $\lambda$ and $\gamma$ are vectors of Lagrange multipliers. The question is whether the solutions of the two constrained problems coincide as well.

Best Answer

The answer is no. Consider $f_1(x) = x_1^2 + x_2^2$, $f_2(x) = x_1^2 + 2 x_2^2$ and $g(x) = x_1 + x_2 - 1$. Both objectives attain their unconstrained minimum at the origin, so the premise of a common minimizer is satisfied.

Since $g(x)=0$, you can substitute $x_2 = 1-x_1$. The first constrained problem is therefore $\min_{x_1} \{ x_1^2 + (1-x_1)^2 \}$, with minimizer $x_1 = 1/2$, while the second problem is $\min_{x_1} \{ x_1^2 + 2(1-x_1)^2 \}$, with minimizer $x_1 = 2/3$. The constrained solutions $(1/2, 1/2)$ and $(2/3, 1/3)$ therefore differ.
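The same conclusion can be read off the stationarity conditions of the two Lagrangians given in the question, which also produces the multipliers:

$$ \nabla_X \mathcal{L}_1 = 0: \quad 2x_1 + \lambda = 0,\ \ 2x_2 + \lambda = 0,\ \ x_1 + x_2 = 1 \ \Rightarrow\ x = (1/2,\, 1/2),\ \lambda = -1 $$

$$ \nabla_X \mathcal{L}_2 = 0: \quad 2x_1 + \gamma = 0,\ \ 4x_2 + \gamma = 0,\ \ x_1 + x_2 = 1 \ \Rightarrow\ x = (2/3,\, 1/3),\ \gamma = -4/3 $$

So not only do the constrained minimizers differ, the multipliers $\lambda$ and $\gamma$ differ as well.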
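If one wants to confirm this numerically, here is a minimal sketch assuming NumPy and SciPy are available; the starting point and the solver (SciPy's default SLSQP for equality-constrained problems) are illustrative choices, not part of the original argument.

```python
# Numerical check of the counterexample; assumes NumPy and SciPy are installed.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0]**2 + x[1]**2          # first strictly convex objective
f2 = lambda x: x[0]**2 + 2.0 * x[1]**2    # second strictly convex objective

# Equality constraint g(x) = x_1 + x_2 - 1 = 0
con = {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}

x0 = np.zeros(2)  # the common unconstrained minimizer of f1 and f2

sol1 = minimize(f1, x0, constraints=[con])
sol2 = minimize(f2, x0, constraints=[con])

print(sol1.x)  # approximately [0.5, 0.5]
print(sol2.x)  # approximately [0.6667, 0.3333]
```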
