I believe you should look up the Fritz John conditions. In my opinion they are superior to the KKT conditions: they incorporate the rather ugly issue of the "constraint qualification" into the Lagrangean through an additional multiplier, and they can uncover solutions to an optimization problem that may pass unnoticed under KKT.
The Fritz John conditions have been stated in
"F. JOHN. Extremum problems with inequalities as side conditions. In “Studies and Essays, Courant Anniversary Volume” (K. O. Friedrichs, O. E. Neugebauer and J. J. Stoker, eds.), pp. 187-204. Wiley (Interscience), New York, 1948"
and have been generalized in
"Mangasarian, O. L., & Fromovitz, S. (1967). The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. Journal of Mathematical Analysis and Applications, 17(1), 37-47."
In a simplified setting, assume we want to
\begin{align}
\max_x\; & f(x)\\
\text{s.t.}\quad & g(x) \ge 0\\
& h(x) = 0
\end{align}
Then the Lagrangean for the Fritz John conditions is formed as
$$L_{FJ} = \xi f(x) + \lambda g(x)+\mu h(x) $$
$$\lambda \ge 0,\qquad \mu \in \mathbb{R},\qquad \xi \in\{0,1\},\qquad (\xi , \lambda, \mu)\neq \mathbf 0$$
The new element is the multiplier $\xi$ on the objective function, which takes only the values zero or unity (after normalization). If a solution necessitates that $\xi =1$, we obtain the KKT conditions with the constraint qualification satisfied. If a solution necessitates that $\xi =0$, it reflects, among other special cases, the case where the constraint qualification fails to hold.
A standard example is the case where the feasible set for $x$ has been reduced to a single point due to the constraints. Then we will find that the only solution dictates that $\xi=0$, which has an intuitive explanation: if $x$ can take one and only one value due to the constraints, then the objective function "plays no role" in the determination of $x$ and so it gets a zero multiplier.
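To make this concrete, here is a standard textbook illustration of the phenomenon (the specific functions are my choice, not part of the original discussion):
$$\max_x\; f(x) = x \qquad \text{s.t.}\quad g(x) = -x^2 \ge 0.$$
The feasible set is the single point $x=0$. The KKT stationarity condition $f'(x) + \lambda g'(x) = 1 - 2\lambda x = 0$ has no solution at $x=0$ (it reduces to $1=0$), so a purely algebraic KKT search reports "no solution". The FJ stationarity condition $\xi - 2\lambda x = 0$ is satisfied at $x=0$ with $\xi = 0$ and any $\lambda > 0$, which respects $(\xi,\lambda) \neq \mathbf 0$ and correctly identifies the maximizer.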
This is not so special a case as it may appear: our problem may contain parameters that vary over some range, and for some combination(s) of their values the feasible set may shrink to a single point.
If you have applied the KKT conditions in such a setting, using only algebraic calculations, you could end up characterizing such cases as "no solution" while a solution does exist. I have seen it happen; this is why I believe that the FJ conditions are superior.
Since it doesn't seem that anybody is giving an answer, I will slightly elaborate on my comments above. The first thing to point out is that the KKT conditions don't give a "procedure", as your question implies. Rather, the KKT conditions give a "target" for procedures to move towards.
KKT conditions are primarily a set of necessary conditions for optimality of (constrained) optimization problems. This means that if a solution does NOT satisfy the conditions, we know it is NOT optimal. In particular cases, the KKT conditions are stronger and are necessary and sufficient (e.g., Type 1 invex functions). In these cases, if a solution satisfies the system of KKT conditions it is globally optimal.
So what do the KKT equations do for us? By giving us a system of equations, we can attempt to find a solution to them. Typically, we can't solve these equations analytically, so we use numerical methods to solve them (e.g., sequential quadratic programming).
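As a concrete sketch (my own example, not from the question): SciPy's `minimize` with `method="SLSQP"` implements a sequential quadratic programming routine, which iterates toward a point satisfying the KKT system. Here it is applied to a small equality-constrained problem whose solution is known analytically.

```python
# Solve: minimize x^2 + y^2  subject to  x + y = 1.
# By symmetry (or by the KKT/Lagrange conditions) the solution is (0.5, 0.5).
import numpy as np
from scipy.optimize import minimize

objective = lambda z: z[0]**2 + z[1]**2
constraint = {"type": "eq", "fun": lambda z: z[0] + z[1] - 1.0}

# SLSQP drives the iterates toward a KKT point of this problem.
res = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
               constraints=[constraint])
print(res.x)  # close to [0.5, 0.5]
```

The solver does not "apply KKT" symbolically; it uses the KKT system as the optimality target its iterations move towards.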
If you have specific questions about numerical (or exact) methods in given contexts, I'd suggest asking a new question with those details.
Best Answer
The KKT conditions encode the following two observations. At the optimal solution,
The gradient of the objective function is perpendicular to the constraint set, and
The constraint is satisfied.
Condition 2. comes from the definition of the optimization problem.
If condition 1. is not satisfied at a point on the constraint surface, you can increase the value of the objective function by walking along the constraint surface in the direction of the projection of the gradient onto the tangent hyperplane to the constraint surface.
Imagine a bead constrained to slide on the constraint surface, and the gradient vector as a force pushing on it. The bead can only be at equilibrium if the force pushing on it is perpendicular to the surface it is constrained on.
Or imagine walking on a path in the mountains that doesn't go to a peak. At the high point the most uphill direction is perpendicular to the path.
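The perpendicularity in observation 1 can be checked numerically on a toy problem (my choice of functions, not from the answer): maximize $f(x,y) = x + y$ on the unit circle $x^2 + y^2 = 1$, whose maximizer is $(1/\sqrt 2, 1/\sqrt 2)$.

```python
# At the constrained optimum, the objective gradient has no component
# along the constraint surface: no uphill direction remains on the path.
import numpy as np

p = np.array([1.0, 1.0]) / np.sqrt(2.0)   # constrained maximizer on the circle
grad_f = np.array([1.0, 1.0])             # gradient of f(x, y) = x + y (constant)
tangent = np.array([-p[1], p[0]])         # tangent direction to the circle at p

# Zero dot product: the gradient is perpendicular to the constraint surface.
print(np.dot(grad_f, tangent))  # 0.0
```

At any other point on the circle the dot product is nonzero, and walking along the circle in the sign of that component increases $f$, exactly as described above.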
With an inequality constraint there are two distinct possibilities.
The optimal point could be on the boundary of the inequality constraint feasible set. Then this optimal point also solves the modified optimization problem where the inequality constraint is replaced with an equality constraint. So, in this case we can use the Lagrange multiplier result for equality constrained problems.
Or, the optimal point could be in the interior of the inequality constraint feasible set, in which case the constraint is (locally) inactive and might as well not be there.
Complementary slackness encodes both these possibilities. You have a product of two factors that must be zero. If the constraint factor is zero, then you are on the boundary of the constraint set, whereas if the multiplier factor is zero, the constraint is inactive and might as well not be there.
Dual feasibility (nonnegativity of the inequality multipliers) ensures that if you are on the boundary of an inequality constraint, you could not improve things by moving into the interior. In the bead analogy, the constraint surface can only exert a "reaction force" on the bead in the inward direction, to keep it inside the feasible set, but cannot pull it outward.
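Both complementary-slackness cases, together with dual feasibility, can be illustrated on a one-dimensional toy problem (my own example): minimize $(x-a)^2$ subject to $x \ge 0$, whose KKT system $2(x-a) - \lambda = 0$, $\lambda x = 0$, $\lambda \ge 0$ is solvable by hand.

```python
# minimize (x - a)^2  subject to  x >= 0.
# KKT system: 2(x - a) - lam = 0,  lam * x = 0,  lam >= 0.

def kkt_solution(a):
    if a >= 0:
        # Interior optimum: constraint inactive, multiplier factor is zero.
        return a, 0.0
    # Boundary optimum: constraint factor is zero, lam = -2a > 0 pushes inward.
    return 0.0, -2.0 * a

x1, lam1 = kkt_solution(1.0)    # a = 1: unconstrained minimizer already feasible
x2, lam2 = kkt_solution(-1.0)   # a = -1: minimizer pinned to the boundary x = 0

# Complementary slackness holds in both cases: the product is zero.
print(x1 * lam1, x2 * lam2)  # 0.0 0.0
```

In the boundary case the positive multiplier is exactly the "reaction force": it measures how hard the constraint must push inward to keep the minimizer at $x = 0$.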