Optimal solution to the primal and the Lagrangian dual when there is a duality gap

convex-optimization, duality-theorems, non-convex-optimization, operations-research, optimization

Consider the primal problem $\max_{x\in\mathcal{X}} f(x)$ subject to $g(x)\leq 1$ and its Lagrangian dual $\min_{\lambda\geq 0}\max_{x\in\mathcal{X}}\{f(x)+\lambda(1-g(x))\}$, with no convexity assumptions. Assume that $f$ and $g$ are bounded, that both the primal and the dual admit an optimum, and that $g(0)=0$; then a duality gap is possible.
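To see that a gap really can occur without convexity, here is a minimal numerical sketch on an assumed two-point feasible set (the set $\mathcal{X}=\{0,2\}$ and the values of $f$ and $g$ below are invented for illustration, not taken from the problem statement): the primal optimum is $0$, while the dual optimum is $\min_{\lambda\ge 0}\max\{\lambda,\,10-2\lambda\}=10/3$.

```python
import numpy as np

# Assumed nonconvex (discrete) instance: X = {0, 2},
# f(0)=0, g(0)=0 (feasible), f(2)=10, g(2)=3 (infeasible, since g > 1).
X = [0, 2]
f = {0: 0.0, 2: 10.0}
g = {0: 0.0, 2: 3.0}

# Primal: max f(x) subject to g(x) <= 1.
primal_opt = max(f[x] for x in X if g[x] <= 1)

# Dual function: q(lam) = max_x f(x) + lam * (1 - g(x)).
def q(lam):
    return max(f[x] + lam * (1 - g[x]) for x in X)

# Minimize q over a fine grid of lambda >= 0 (the exact minimizer here is 10/3).
lams = np.linspace(0.0, 10.0, 100001)
dual_opt = min(q(lam) for lam in lams)

print(primal_opt)               # primal optimal value: 0.0
print(round(dual_opt, 3))       # dual optimal value: approx. 10/3
print(dual_opt - primal_opt)    # strictly positive duality gap
```

The gap appears precisely because the inner maximization jumps between the two points of $\mathcal{X}$ as $\lambda$ varies, which cannot happen in the convex setting with a constraint qualification.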

Let $x_P$ be an optimal primal solution and $(\lambda_D,x_D)$ an optimal dual solution. Is it then true that $x_P=x_D$? I don't think it is, but I couldn't find an example.

Best Answer

Suppose that a duality gap does occur and that $x_P=x_D.$ The duality gap means that $$f(x_D) + \lambda_D(1-g(x_D)) > f(x_P),$$ so $x_P=x_D$ means $$\lambda_D(1-g(x_P)) > 0.$$ Thus $\lambda_D > 0$ and $g(x_P) < 1.$

Now suppose that $\epsilon > 0$ is small enough that $\lambda_D - \epsilon > 0,$ and suppose that $\hat{x}\in \mathcal{X}$ maximizes $f(x) + (\lambda_D - \epsilon)(1 - g(x))$ over $\mathcal{X}.$ Since $\lambda_D$ minimizes the dual function, the dual value at $\lambda_D - \epsilon$ is at least the dual value at $\lambda_D$: $$f(\hat{x}) + (\lambda_D - \epsilon)(1 - g(\hat{x})) \ge f(x_D) + \lambda_D(1-g(x_D)) = f(x_P) + \lambda_D(1-g(x_P)).$$ On the other hand, optimality of $\hat{x}$ for $\lambda = \lambda_D-\epsilon$ means $$f(\hat{x}) + (\lambda_D - \epsilon)(1 - g(\hat{x})) \le f(x_P) + (\lambda_D - \epsilon)(1 - g(x_P)).$$ Putting these together, $$\lambda_D(1-g(x_P)) \le (\lambda_D-\epsilon)(1-g(x_P)).$$ Since $1-g(x_P)>0,$ we can divide it out and get $$\lambda_D \le \lambda_D - \epsilon,$$ a contradiction.
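The first inequality above (perturbing $\lambda$ below the dual minimizer cannot decrease the dual function) can be checked numerically. The sketch below uses an assumed two-point instance $\mathcal{X}=\{0,2\}$ with invented values of $f$ and $g$, for which the dual minimizer works out to $\lambda_D = 10/3$; none of these numbers come from the original post.

```python
# Assumed toy instance: X = {0, 2}, f(0)=0, g(0)=0, f(2)=10, g(2)=3.
F = {0: 0.0, 2: 10.0}
G = {0: 0.0, 2: 3.0}

def q(lam):
    """Dual function q(lam) = max_{x in X} f(x) + lam * (1 - g(x))."""
    return max(F[x] + lam * (1 - G[x]) for x in F)

lam_D = 10 / 3  # dual minimizer for this instance: max(lam, 10 - 2*lam) is smallest here

# q(lam_D - eps) >= q(lam_D) for every admissible eps > 0,
# exactly as the first inequality in the argument asserts.
for eps in (0.1, 0.01, 0.001):
    assert q(lam_D - eps) >= q(lam_D)

print(q(lam_D))  # dual optimal value, 10/3 for this instance
```

This only verifies the minimality of $\lambda_D$; the rest of the argument is the algebraic comparison of the two displayed inequalities.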