[Math] Kuhn-Tucker conditions and the sign of the Lagrange multiplier

karush-kuhn-tucker, lagrange-multiplier, optimization

I was under the impression that, under the Kuhn-Tucker conditions for a constrained optimisation with inequality constraints, the multipliers must satisfy a non-negativity condition.

i.e. $$\lambda \geq 0$$

Operating with the complementary slackness condition for the constraint $g(x,y) \leq c$.

i.e. $$\lambda \cdot [g(x,y)-c] = 0$$
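For concreteness, here is the full set of first-order conditions I had in mind, written for maximising $f(x,y)$ subject to $g(x,y) \leq c$ with Lagrangian $\mathcal{L} = f(x,y) - \lambda\,[g(x,y) - c]$ (these sign conventions are my own assumption, since textbooks vary):

$$\nabla f(x^*,y^*) = \lambda\,\nabla g(x^*,y^*), \qquad g(x^*,y^*) \leq c, \qquad \lambda \geq 0, \qquad \lambda\,[g(x^*,y^*) - c] = 0.$$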

However, I was just attempting to solve the following problem:

[Problem statement given as an image, not reproduced here: a constrained minimisation in $x$ and $y$ with an inequality constraint depending on a parameter $a$.]

According to the solutions, the relevant condition on the Lagrange multiplier is $\lambda \leq 0$, whereas I had incorrectly formulated it as $\lambda \geq 0$.

I have otherwise managed to solve the question, finding that if the constraint is active, $$x^* = \frac{2a-1}{5}, \qquad y^* = \frac{a+2}{5}, \qquad \lambda = \frac{2a-6}{5},$$

and $(x^*, y^*) = (1,1)$ if the constraint is not active.

However, as my inequality for the multiplier was the wrong way round, I stated that the constraint would be active if $a > 3$ and inactive if $a \leq 3$, which I can see is incorrect when I plot the graphs.
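For reference, here is a quick numerical sanity check of the closed-form answers and of when the constraint binds. The original problem statement is in the image above and not reproduced, so the objective $f(x,y) = (x-1)^2 + (y-1)^2$ and constraint $2x + y \leq a$ used below are inferred from the quoted solution and should be treated as assumptions.

```python
# Numerical sanity check of the closed-form solution.
# ASSUMPTION: the problem (image not reproduced) is taken to be
#   minimise f(x, y) = (x - 1)^2 + (y - 1)^2  subject to  2x + y <= a,
# chosen because it reproduces the closed-form expressions quoted above.
import numpy as np
from scipy.optimize import minimize

def numeric_min(a):
    f = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 1.0) ** 2
    # scipy's "ineq" constraints have the form fun(v) >= 0,
    # so 2x + y <= a is rewritten as a - (2x + y) >= 0.
    con = {"type": "ineq", "fun": lambda v: a - (2.0 * v[0] + v[1])}
    return minimize(f, x0=np.zeros(2), constraints=[con]).x

for a in (1.0, 5.0):
    # Closed form: the constraint is active (lambda <= 0) exactly when a <= 3.
    if a <= 3:
        x_cf, y_cf = (2 * a - 1) / 5, (a + 2) / 5
    else:
        x_cf, y_cf = 1.0, 1.0
    x_n, y_n = numeric_min(a)
    print(f"a = {a}: numeric ({x_n:.4f}, {y_n:.4f}) vs closed-form ({x_cf:.4f}, {y_cf:.4f})")
```

Under these assumptions, $a = 1$ gives a minimiser on the constraint line $2x + y = 1$ (active case), while for $a = 5$ the unconstrained minimiser $(1,1)$ is already feasible (inactive case).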

Can someone please explain in what circumstances the multiplier condition would be $\leq 0$ rather than $\geq 0$? Is it because it's a minimisation problem rather than a maximisation problem?
Also, as a sub-point: if I understand correctly, when minimising with the Lagrangian method under inequality constraints, one is meant to multiply the objective function by $-1$ and then maximise it. I didn't do that; I just plugged the objective in as it is. I fail to see how that would affect the condition on the multiplier, though, and my optimal points still seem to agree with the solutions.

Any guidance on the matter is greatly appreciated.

Best Answer

The reason that you get the multiplier condition $\lambda \leq 0$ rather than $\lambda \geq 0$ is that you have a minimization problem, as you correctly indicate.
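One quick way to see the sign flip is via the reduction you mention at the end of your question (a sketch, keeping the convention that the Lagrangian is the objective minus the multiplier times $[g(x,y) - c]$): minimizing $f$ subject to $g(x,y) \leq c$ is equivalent to maximizing $-f$ subject to the same constraint. The maximization problem has a multiplier $\mu \geq 0$ and stationarity condition $\nabla(-f) = \mu\,\nabla g$ at the optimum, so

$$\nabla f(x^*,y^*) = -\mu\,\nabla g(x^*,y^*) = \lambda\,\nabla g(x^*,y^*), \qquad \lambda = -\mu \leq 0.$$

The multiplier attached directly to $f$ is therefore non-positive, while the optimizer itself is unchanged, which is why your points still agree with the solutions.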

Intuitively, if the constraint $g(x,y) \leq c$ is active at the point $(x^*,y^*)$, then $g(x,y)$ increases as you move from $(x^*,y^*)$ out of the feasible region and decreases as you move from $(x^*,y^*)$ into it. If $(x^*,y^*)$ solves the minimization problem, then $f(x,y)$ has to increase as you move from $(x^*,y^*)$ into the feasible region and decrease as you move from $(x^*,y^*)$ out of it. Since $f(x,y)$ and $g(x,y)$ increase in opposite directions, the Lagrange multiplier cannot be positive.

To show this more formally, it is convenient to use properties of gradient vectors.
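A sketch of how that argument typically goes (my rendering of the standard one, assuming the stationarity condition $\nabla f(x^*,y^*) = \lambda\,\nabla g(x^*,y^*)$ and $\nabla g(x^*,y^*) \neq 0$): the gradient $\nabla g(x^*,y^*)$ points in the direction of increasing $g$, i.e. out of the feasible region $\{g \leq c\}$, so $d = -\nabla g(x^*,y^*)$ points into it. If $(x^*,y^*)$ is a minimizer, $f$ cannot decrease to first order along an inward direction, hence

$$0 \;\leq\; \nabla f(x^*,y^*) \cdot d \;=\; \lambda\,\nabla g(x^*,y^*) \cdot \big[-\nabla g(x^*,y^*)\big] \;=\; -\lambda\,\big\|\nabla g(x^*,y^*)\big\|^2,$$

which forces $\lambda \leq 0$.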