[Math] Maximization via Lagrange multipliers vs. substitution and partial derivatives

calculus, lagrange-multiplier, multivariable-calculus, optimization, partial-derivative

Consider the example of maximizing $x^2 y z$ under the constraint that $x^2 + y^2 + z^2 = 5$.

One way to do this is to use Lagrange multipliers, solving the system of equations

$$2xyz = 2x \lambda$$
$$x^2 z = 2 y \lambda$$
$$x^2 y = 2 z \lambda$$
$$x^2 + y^2 + z^2 = 5$$
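As a sanity check of this system, here is a short SymPy sketch (assuming SymPy is available) that substitutes one candidate critical point, $x^2 = 5/2$, $y = z = \sqrt{5}/2$, $\lambda = 5/4$, into all four equations. The particular point is an assumption chosen for illustration; the check itself is just substitution.

```python
# Verify one candidate solution of the Lagrange system by substitution.
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)

eqs = [
    2*x*y*z - 2*x*lam,        # df/dx = lam * dg/dx
    x**2*z - 2*y*lam,         # df/dy = lam * dg/dy
    x**2*y - 2*z*lam,         # df/dz = lam * dg/dz
    x**2 + y**2 + z**2 - 5,   # the constraint g = 0
]

# Candidate critical point (an assumed example, not derived here)
point = {x: sp.sqrt(sp.Rational(5, 2)),
         y: sp.sqrt(5)/2,
         z: sp.sqrt(5)/2,
         lam: sp.Rational(5, 4)}

residuals = [sp.simplify(e.subs(point)) for e in eqs]
print(residuals)                             # all zero at a critical point
print((x**2*y*z).subs(point))                # objective value there: 25/8
```

All four residuals vanish, so the point is indeed a solution of the system, with objective value $25/8$.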


However, couldn't you just substitute $x^2 = 5 - y^2 - z^2$ into the expression you want to maximize to get $y z \left(5 - y^2 - z^2\right)$,

and then just maximize that by setting the $y$ and $z$ partial derivatives equal to zero?

Then you just have to solve the arguably simpler system of equations:

$$3 y^2 z+z^3=5 z$$
$$y^3+3 y z^2=5 y$$

where a factor of $z$ (respectively $y$) cancels nicely from both sides, provided you also keep track of the $z = 0$ and $y = 0$ solutions you divide away.
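The reduced system is small enough for SymPy's `solve` to handle directly. A sketch (again assuming SymPy) that finds all real critical points of the substituted function and the largest objective value among them:

```python
# Solve the two-variable system obtained after substitution.
import sympy as sp

y, z = sp.symbols('y z', real=True)
h = y*z*(5 - y**2 - z**2)

# Setting both partials to zero reproduces the system above:
# 5z - 3y^2 z - z^3 = 0  and  5y - y^3 - 3y z^2 = 0
crit = sp.solve([sp.diff(h, y), sp.diff(h, z)], [y, z])
print(crit)

best = max(h.subs({y: p[0], z: p[1]}) for p in crit)
print(best)   # largest value of h over the critical points: 25/8
```

Note that all of these critical points happen to lie inside the disk $y^2 + z^2 \leq 5$, which is why the substitution works out here; the answer below explains why that cannot be taken for granted.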

Why are Lagrange multipliers necessary when you can always substitute the constraint and maximize the resulting function?

When should you choose one over the other?

Best Answer

If you're going to substitute like this, be careful to make sure you're not throwing out information when you do. For instance, the initial constraint of $x^2 + y^2 + z^2 = 5$ implies in particular that $y^2 + z^2 \leq 5$. If you blindly substitute $x^2 = 5-y^2 - z^2$ as above, you'll end up trying to maximize $yz(5 - y^2 - z^2)$ with no constraint on $y$ or $z$ whatsoever, when in fact you should maximize it on the disk $y^2 + z^2 \leq 5$.
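To see the issue concretely, here is a minimal numeric sketch (plain Python, no extra assumptions) showing that the substituted function is unbounded above once the disk constraint is dropped: along the line $y = t$, $z = -t$ it equals $t^2(2t^2 - 5)$, which grows like $2t^4$, and any such point with $y^2 + z^2 > 5$ corresponds to no real $x$ at all.

```python
# The substituted objective, with the disk constraint forgotten.
def h(y, z):
    return y * z * (5 - y**2 - z**2)

# Along y = t, z = -t: h(t, -t) = t**2 * (2*t**2 - 5), unbounded above.
for t in (1, 5, 10, 100):
    print(t, h(t, -t))
```

So an unconstrained "maximize $h$" has no answer at all; the substitution is only valid together with the inherited constraint $y^2 + z^2 \leq 5$.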