Both sets of conditions are necessary conditions for a point to be optimal, but they're not quite the same mathematically. The KKT conditions are more restrictive and thus shrink the class of points (from those satisfying the Fritz John conditions) that must be tested for optimality. The additional restriction with KKT is that the Lagrange multiplier on the gradient of the objective function cannot be zero. One of the most important resulting differences is that KKT points for linear programs must be optimal, whereas Fritz John points for linear programs don't have to be.
The section on KKT conditions in Bazaraa, Sherali, and Shetty's Nonlinear Programming: Theory and Algorithms (second edition) has a nice discussion of the issues. There are several good examples, especially one in which a Fritz John point for a linear program is shown not to be optimal. That can't happen with KKT points for linear programs.
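As a concrete illustration of that claim (a toy problem of my own, not the example from the book): when two inequalities duplicate one equality, every feasible point satisfies the Fritz John conditions with the objective multiplier equal to zero, including non-optimal points, while KKT correctly rules them out. A quick numeric sketch, assuming NumPy:

```python
# Hypothetical toy LP (not the book's example):
#   min x1  s.t.  x2 <= 0,  -x2 <= 0,  -x1 <= 0
# Feasible set is the ray {(x1, 0) : x1 >= 0}; the optimum is (0, 0).
import numpy as np

grad_f = np.array([1.0, 0.0])          # gradient of the objective x1
grad_g = [np.array([0.0, 1.0]),        # gradient of  x2 <= 0
          np.array([0.0, -1.0]),       # gradient of -x2 <= 0
          np.array([-1.0, 0.0])]       # gradient of -x1 <= 0

x = np.array([5.0, 0.0])               # feasible but NOT optimal
active = [0, 1]                        # only the x2-constraints are active at x

# Fritz John: xi*grad_f + sum(lam_i * grad_g_i) = 0 over active constraints,
# with (xi, lam) != 0 and lam_i = 0 on inactive constraints.
xi, lam = 0.0, {0: 1.0, 1: 1.0}        # the two active gradients cancel
residual = xi * grad_f + sum(lam[i] * grad_g[i] for i in active)
assert np.allclose(residual, 0.0)      # (5, 0) is a Fritz John point with xi = 0

# KKT forces xi = 1; the active gradients have zero first component, so the
# first component of the KKT residual is 1 for ANY choice of multipliers.
kkt_residual = 1.0 * grad_f + sum(l * grad_g[i] for i, l in lam.items())
assert not np.allclose(kkt_residual, 0.0)  # (5, 0) is not a KKT point
```

The degenerate pair of constraints makes the active gradients linearly dependent, which is exactly the kind of constraint-qualification failure the book's example exploits.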
I believe that you should look up the Fritz John conditions. In my opinion they are superior to the KKT conditions, in that they incorporate the rather ugly issue of the "constraint qualification" into the Lagrangean through an additional multiplier, and they are able to uncover solutions to an optimization problem that under KKT may pass unnoticed.
The Fritz John conditions have been stated in
John, F. (1948). Extremum problems with inequalities as side conditions. In Studies and Essays, Courant Anniversary Volume (K. O. Friedrichs, O. E. Neugebauer, & J. J. Stoker, Eds.), pp. 187-204. Wiley (Interscience), New York.
and have been generalized in
Mangasarian, O. L., & Fromovitz, S. (1967). The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. Journal of Mathematical Analysis and Applications, 17(1), 37-47.
In a simplified setting, assume we want to
\begin{align}
\max_x \; & f(x)\\
\text{s.t. } & g(x) \ge 0\\
& h(x) = 0
\end{align}
Then the Lagrangean in the case of the Fritz John conditions is formed as
$$L_{FJ} = \xi f(x) + \lambda g(x)+\mu h(x) $$
$$\lambda \ge 0,\qquad \xi \in\{0,1\},\qquad (\xi , \lambda, \mu)\neq \mathbf 0$$
with $\mu$, the multiplier on the equality constraint, unrestricted in sign.
The new element is the multiplier $\xi$ on the objective function, which takes only the values zero or one (after normalization). If a solution requires $\xi =1$, we recover the KKT conditions with the constraint qualification satisfied. If a solution requires $\xi =0$, this reflects, among other special cases, a situation where the constraint qualification fails to hold.
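Spelled out for the simplified problem above (a sketch obtained by differentiating $L_{FJ}$), the Fritz John system consists of stationarity together with feasibility and complementary slackness:
$$\xi \nabla f(x) + \lambda \nabla g(x) + \mu \nabla h(x) = \mathbf 0,\qquad \lambda\, g(x) = 0,\qquad g(x)\ge 0,\qquad h(x)=0.$$
Setting $\xi = 1$ recovers the familiar KKT system.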
A standard example is the case where the feasible set for $x$ has been reduced to a single point due to the constraints. Then we will find that the only solution dictates that $\xi=0$, which has an intuitive explanation: if $x$ can take one and only one value due to the constraints, then the objective function "plays no role" in the determination of $x$ and so it gets a zero multiplier.
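This single-point case can be checked symbolically. A minimal sketch of my own (a hypothetical one-dimensional problem, not taken from the answer above), assuming SymPy:

```python
# Hypothetical illustration:  max x  s.t.  -x**2 >= 0
# The constraint forces x = 0, so the feasible set is the single point {0}.
import sympy as sp

x, xi, lam = sp.symbols('x xi lam')
f = x
g = -x**2
L = xi * f + lam * g                  # Fritz John Lagrangean

stat = sp.diff(L, x)                  # stationarity: xi - 2*lam*x

# KKT fixes xi = 1: at the only feasible point x = 0 the stationarity
# condition reads 1 = 0, which no multiplier lam can repair.
assert stat.subs({xi: 1, x: 0}) == 1  # never zero: algebra reports "no solution"

# Fritz John allows xi = 0: stationarity holds at x = 0 for every lam,
# and complementary slackness lam*g(x) = 0 holds there as well.
assert stat.subs({xi: 0, x: 0}) == 0
assert (lam * g).subs({x: 0}) == 0
```

Here $\nabla g(0) = 0$, so the constraint qualification fails, and only the Fritz John conditions (with $\xi = 0$) detect the optimum.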
This is not as special a case as it may appear: a problem may contain parameters that vary over some range, and for certain combinations of their values the feasible set may shrink to a single point.
If you have applied the KKT conditions in such a setting, using only algebraic calculations, you could end up characterizing such cases as "no solution" when a solution does exist. I have seen it happen, which is why I believe the FJ conditions are superior.
I'd usually put this in a comment, but I do not have enough reputation (my comment above is from the mathematica.stackexchange forums). There is actually not much good literature on optimization that I can really recommend. Most of my knowledge comes from German literature, which I will not mention here. One book I can recommend, though, is the well-known Convex Analysis and Optimization by Bertsekas. It's not perfect, and no quick introduction either, but it is well written and does a good job of explaining things (which unfortunately is pretty rare in math literature). Another good book seems to be Convex Optimization by Boyd and Vandenberghe. If anybody knows other good literature on (convex) optimization, I'd like to hear about it.