Constrained Euler-Lagrange equations: equivalent constraints leading to different equations of motion

calculus-of-variations, constraints, euler-lagrange-equation, lagrange-multiplier

I seem to be having a misunderstanding with the concept of constrained Euler-Lagrange equations. Given a Lagrangian $L = L(q_1, q_2, \dot{q}_1, \dot{q}_2; t)$ and a constraint $g(q_1, q_2; t) = 0$, I've learned that we can find the equations of motion as:

\begin{align*}
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_1}\right)- \left(\frac{\partial L}{\partial q_1}\right) &= \lambda(t)\frac{\partial g}{\partial q_1} \\
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_2}\right)- \left(\frac{\partial L}{\partial q_2}\right) &= \lambda(t)\frac{\partial g}{\partial q_2} \\
g(q_1, q_2) &= 0
\end{align*}

where $\lambda(t)$ is a Lagrange multiplier. Now, suppose the constraint function is given by $g(q_1, q_2) = q_1^2 + q_2^2 - 1$. Then $\partial g/\partial q_1 = 2q_1$ and $\partial g / \partial q_2 = 2q_2$, and the equations of motion become:

\begin{align*}
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_1}\right)- \left(\frac{\partial L}{\partial q_1}\right) &= 2\lambda(t)q_1 \\
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_2}\right)- \left(\frac{\partial L}{\partial q_2}\right) &= 2\lambda(t)q_2\\
q_1^2 + q_2^2 &= 1
\end{align*}
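To make this system concrete, here is a small sympy check; the free-particle Lagrangian $L = \tfrac{1}{2}(\dot{q}_1^2 + \dot{q}_2^2)$ and the trajectory $q_1 = \cos t$, $q_2 = \sin t$ are illustrative assumptions of mine, not part of the question. For that Lagrangian the equations above reduce to $\ddot{q}_i = 2\lambda q_i$, and the circular trajectory satisfies all three equations with $\lambda = -1/2$:

```python
import sympy as sp

t = sp.symbols('t')
lam = sp.Rational(-1, 2)  # candidate multiplier for this trajectory

# Assumed trajectory on the unit circle
q1, q2 = sp.cos(t), sp.sin(t)

# For L = (q1dot^2 + q2dot^2)/2 the constrained equations read q_i'' = 2*lam*q_i;
# each residual below should simplify to zero along the trajectory
eq1 = sp.simplify(sp.diff(q1, t, 2) - 2*lam*q1)
eq2 = sp.simplify(sp.diff(q2, t, 2) - 2*lam*q2)
constraint = sp.simplify(q1**2 + q2**2 - 1)

print(eq1, eq2, constraint)
```

Here the multiplier has the usual interpretation of a constraint force (the centripetal force holding the particle on the circle).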

Now suppose instead that $g(q_1, q_2) = (q_1^2 + q_2^2 - 1)^2$. This induces the same constraint, as—in both cases—$g(q_1, q_2) = 0$ if and only if $q_1^2 + q_2^2 = 1$. However, we now have $\partial g/\partial q_1 = 4(q_1^2 + q_2^2 - 1)q_1$ and $\partial g/\partial q_2 = 4(q_1^2 + q_2^2 - 1)q_2$, both of which vanish when $q_1^2 + q_2^2 = 1$. This leads to:

\begin{align*}
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_1}\right)- \left(\frac{\partial L}{\partial q_1}\right) &= 0 \\
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_2}\right)- \left(\frac{\partial L}{\partial q_2}\right) &= 0\\
q_1^2 + q_2^2 &= 1
\end{align*}

I feel I must be making a mistake, potentially something fundamental, but I'm not sure where the argument breaks down.

Best Answer

The issue can be illustrated in a simpler fashion:

Take $P_1: \ \ \min_{x^2+y^2 =1} x$ and $P_2: \ \ \min_{(x^2+y^2 -1)^2=0} x$. The two problems are clearly equivalent, and a $\min$ (or $\max$) exists.

For $P_1$ the constraint gradient is always nonzero when the constraint is satisfied, so the regularity condition holds, and the stationarity conditions are $1 + 2 \lambda x = 0$ and $0 + 2 \lambda y = 0$. The first shows that $\lambda \neq 0$, the second then forces $y = 0$, and the constraint gives $x = \pm 1$; the minimum is at $x = -1, y = 0$ (with $\lambda = 1/2$).
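As a quick check (a sympy sketch, assuming the library is available), solving the stationarity conditions together with the constraint recovers exactly the two candidate points $(\pm 1, 0)$:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Stationarity of x + lam*(x^2 + y^2 - 1), plus the constraint itself
eqs = [1 + 2*lam*x, 2*lam*y, x**2 + y**2 - 1]
sols = sp.solve(eqs, [x, y, lam], dict=True)

# Candidate points; the minimizer of x among them is (-1, 0)
points = {(s[x], s[y]) for s in sols}
print(points)
```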

For $P_2$ the constraint gradient is always zero when the constraint is satisfied, so the regularity condition does not hold, and the Lagrange multiplier technique does not apply. Indeed, if we applied it anyway, the stationarity condition would read $1 + 4 \lambda x (x^2+y^2-1) = 0$, which on the constraint set $x^2 + y^2 = 1$ reduces to $1 = 0$, a contradiction.
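The degeneracy is easy to verify symbolically (again a sympy sketch): both components of the gradient of the squared constraint vanish identically on the circle, so no multiplier can balance the nonzero gradient of the objective.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
g = (x**2 + y**2 - 1)**2

# Gradient of the squared constraint: [4*x*(x^2+y^2-1), 4*y*(x^2+y^2-1)]
grad = [sp.diff(g, v) for v in (x, y)]

# Evaluate on the parametrized circle x = cos(t), y = sin(t):
# both components simplify to zero, so regularity fails everywhere on the constraint set
on_circle = [sp.simplify(c.subs({x: sp.cos(t), y: sp.sin(t)})) for c in grad]
print(on_circle)
```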

The underlying issue is that regularity is what allows the implicit function theorem to be used: it provides a local change of variables that automatically satisfies the constraint, reducing the problem to a (locally) unconstrained one. If regularity does not hold, all bets are off.
