I have a nonlinear objective function with a set of nonlinear inequality constraints, and I am trying to reformulate the problem using the Lagrangian function. My goal is to transform the constrained problem into an unconstrained one and then apply an optimization method such as simulated annealing to the new formulation. Is that even the right approach?
Convert a constrained optimization problem to an unconstrained one using the Lagrangian and KKT conditions
karush-kuhn-tucker, lagrange-multiplier, nonlinear-optimization, optimization
Best Answer
$\newcommand{\minimize}{\operatorname*{Minimize}}$You can use the penalty method or the augmented Lagrangian method. Suppose you have a problem of the form
\begin{aligned} \mathbb{P}:\minimize_{g(x) \leq 0} f(x). \end{aligned}
You can then solve a sequence of unconstrained problems
\begin{aligned} \mathbb{P}_\lambda:\minimize_{x\in\mathbb{R}^n} f(x) + \lambda \left[g(x)\right]_+^2, \end{aligned}
where $\left[z\right]_+$ is defined as
\begin{aligned} \left[z\right]_+ = \begin{cases} 0, &\text{if } z < 0 \\ z, &\text{if } z \geq 0 \end{cases} \end{aligned}
In doing so, you will solve $\mathbb{P}_{\lambda_k}$, increase $\lambda_k$ (e.g., $\lambda_{k+1} = 10 \lambda_k$) and solve the next problem, $\mathbb{P}_{\lambda_{k+1}}$, using the previous solution as an initial guess.
Let us denote the set of minimizers of $\mathbb{P}_{\lambda_k}$ by $X_k^\star$ and let $X^\star$ be the set of minimizers of $\mathbb{P}$. Then, it can be shown that
$$ \limsup_k X_k^\star \subseteq X^{\star}, $$
in other words, every cluster point, $x^\star$, of a sequence $(x_k^\star)_k$ with $x_k^\star \in X_k^\star$, is a minimizer of $\mathbb{P}$.
Although in theory you need to take $\lambda_k$ to infinity in order to recover a solution of the original problem, in practice you can stop the procedure once the constraint violation $[g(x_k^\star)]_+$ drops below a desired tolerance.
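Here is a minimal pure-Python sketch of this procedure on a toy one-dimensional problem of my own choosing ($f(x) = x^2$ subject to $g(x) = 1 - x \leq 0$, whose solution is $x^\star = 1$). The names `argmin_1d` and `penalty_method` are illustrative, not standard; the inner solver is a ternary search, which works here because each penalized objective is unimodal — for your nonlinear problem you would substitute simulated annealing or any other unconstrained method:

```python
def argmin_1d(obj, lo=-10.0, hi=10.0, iters=200):
    # Ternary search: valid here because each penalized objective is unimodal.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if obj(m1) < obj(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def penalty_method(f, g, lam=1.0, growth=10.0, tol=1e-6, max_outer=12):
    # Solve P_lambda for increasing lambda until [g(x)]_+ drops below tol.
    x = None
    for _ in range(max_outer):
        def obj(x, lam=lam):
            # f(x) + lambda * [g(x)]_+^2
            return f(x) + lam * max(g(x), 0.0) ** 2
        x = argmin_1d(obj)
        if max(g(x), 0.0) < tol:
            break
        lam *= growth
    return x

# Toy problem: minimize x^2 subject to 1 - x <= 0; the solution is x* = 1.
x_star = penalty_method(lambda x: x ** 2, lambda x: 1.0 - x)
```

For this toy problem the minimizer of $\mathbb{P}_\lambda$ is $\lambda/(1+\lambda)$, so you can watch the iterates approach $x^\star = 1$ as $\lambda$ grows.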
Note that there exist penalty functions other than $\lambda \left[g(x)\right]_+^2$ (for example, the exact penalty $\lambda \left[g(x)\right]_+$). You can read more about the penalty method in *Numerical Optimization* by Nocedal and Wright.
The augmented Lagrangian method is similar, but it adds an explicit multiplier estimate to the penalty term, which allows convergence without driving the penalty parameter to infinity.
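As an illustration, here is a sketch of the augmented Lagrangian method on the same toy problem as above ($f(x) = x^2$, $g(x) = 1 - x \leq 0$), using the standard form $L_c(x,\mu) = f(x) + \frac{1}{2c}\left(\left[\mu + c\,g(x)\right]_+^2 - \mu^2\right)$ and multiplier update $\mu \leftarrow [\mu + c\,g(x)]_+$. The helper and function names are mine; the inner 1-D ternary-search minimizer is the same unimodality-based stand-in for a general unconstrained solver:

```python
def argmin_1d(obj, lo=-10.0, hi=10.0, iters=200):
    # Ternary search: valid here because each subproblem objective is unimodal.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if obj(m1) < obj(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def augmented_lagrangian(f, g, c=10.0, tol=1e-6, max_outer=50):
    mu = 0.0  # multiplier estimate for the constraint g(x) <= 0
    x = None
    for _ in range(max_outer):
        def L(x, mu=mu):
            # L_c(x, mu) = f(x) + ([mu + c*g(x)]_+^2 - mu^2) / (2c)
            return f(x) + (max(mu + c * g(x), 0.0) ** 2 - mu ** 2) / (2.0 * c)
        x = argmin_1d(L)
        mu = max(mu + c * g(x), 0.0)  # multiplier update: [mu + c*g(x)]_+
        if max(g(x), 0.0) < tol:
            break
    return x, mu

# Same toy problem: minimize x^2 subject to 1 - x <= 0.
x_star, mu_star = augmented_lagrangian(lambda x: x ** 2, lambda x: 1.0 - x)
```

Note that the penalty parameter $c$ stays fixed at $10$ throughout: it is the multiplier estimate $\mu$ that converges (here to the KKT multiplier $\mu^\star = 2$, since $2x^\star - \mu^\star = 0$ at $x^\star = 1$), which is precisely the advantage over the pure penalty method.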