Constrained minimization with unbounded objective function

constraints, lagrange-multiplier, optimization

Consider the following constrained minimization problem

$$
\begin{align}
\text{minimize} && -2 x +1 \\
\text{with respect to} && x \in \mathbb{R} \\
\text{subject to} && x \geq 0
\end{align} \tag{1}
$$

A minimum does not exist, as the objective function decreases without bound as $x \rightarrow \infty$.
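
One way to see this concretely is to hand the linear program to an LP solver and let it report unboundedness. A minimal sketch, assuming SciPy is available (the constant $+1$ is dropped, since it affects neither boundedness nor the minimizer):

```python
# Sketch (assumes SciPy): the LP  minimize -2x  subject to  x >= 0
# is reported as unbounded; the dropped constant +1 does not change this.
from scipy.optimize import linprog

res = linprog(c=[-2.0], bounds=[(0, None)])
print(res.status, res.message)  # status == 3 means the problem is unbounded
```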

However, I now want to show this using Lagrange multipliers:

$$
\begin{align}
L(x, \lambda) &= f(x) + \lambda g(x) \\
f(x) &= -2x + 1 \\
g(x) &= -x
\end{align}
$$

The partial derivatives of $L$ set to zero give the following equations

$$
\begin{align}
\frac{\partial}{\partial x}L(x, \lambda) &= -\lambda - 2 = 0\\
\frac{\partial}{\partial \lambda}L(x, \lambda) &= -x = 0
\end{align}
$$

From this it would follow that $x = 0$. But why do we get $x = 0$ here, which is the argument that gives the largest function value on the feasible region?
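
Indeed, a quick symbolic check of this stationarity system (a sketch, assuming SymPy is available) returns exactly this point, with multiplier $\lambda = -2$:

```python
# Sketch (assumes SymPy): solving the stationarity conditions of
# L(x, lambda) = (-2x + 1) + lambda * (-x) from above.
import sympy as sp

x, lam = sp.symbols('x lambda', real=True)
L = (-2*x + 1) + lam*(-x)

sol = sp.solve([sp.diff(L, x), sp.diff(L, lam)], [x, lam], dict=True)
print(sol)  # [{x: 0, lambda: -2}]
```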

Question: How can one derive the correct answer, namely that the objective function has no minimum, using the above approach?

Best Answer

The Lagrange multiplier technique furnishes the stationary points of the Lagrangian. Afterwards, those points must be qualified; this qualification indicates whether any of the stationary points found is a minimum point. So you can formulate

$$ L(x,\lambda,s) = -2x+1+\lambda(x-s^2) $$

Here $s$ is a slack variable, needed because the technique only handles equality constraints.

The stationary points are obtained by solving

$$ \nabla L = 0 = \left\{ \begin{array}{c} \lambda -2=0 \\ x-s^2=0 \\ \lambda s=0 \\ \end{array} \right. $$

giving

$$ x = 0,\ \ \lambda = 2,\ \ s = 0 $$
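
A quick symbolic solve of this system (again a sketch, assuming SymPy is available) reproduces the same stationary point:

```python
# Sketch (assumes SymPy): solving grad L = 0 for
# L(x, lambda, s) = -2x + 1 + lambda*(x - s**2).
import sympy as sp

x, lam, s = sp.symbols('x lambda s', real=True)
L = -2*x + 1 + lam*(x - s**2)

sol = sp.solve([sp.diff(L, v) for v in (x, lam, s)], [x, lam, s], dict=True)
print(sol)  # [{x: 0, lambda: 2, s: 0}]
```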

Here $s = 0$ tells us that the constraint is active. Analyzing the constraint gradient, which is $1$, and noting that minimizing $f(x) = -2x+1$ is equivalent to maximizing $-f(x) = 2x-1$, we see that $\nabla(-f(x)) = 2 > 0$, so $-f$ grows without limit inside the feasible region ($x \ge 0$); equivalently, the objective $f$ decreases without bound there. Hence the stationary point found cannot be qualified as a minimum point.
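
This can also be checked directly by evaluating the objective at a few feasible points; the values keep decreasing (plain Python, no extra libraries):

```python
# Evaluate f(x) = -2x + 1 at feasible points: the values strictly decrease,
# so the stationary point x = 0 is where f is largest, not smallest.
def f(x):
    return -2*x + 1

for x in (0, 1, 10, 100):
    print(x, f(x))  # prints 1, -1, -19, -199
```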
