I believe you should look up the Fritz John conditions. In my opinion they are superior to the KKT conditions, in that they incorporate the rather ugly issue of the "constraint qualification" into the Lagrangean through an additional multiplier, and they are able to uncover solutions to an optimization problem that may pass unnoticed under KKT.
The Fritz John conditions have been stated in
"F. JOHN. Extremum problems with inequalities as side conditions. In “Studies and Essays, Courant Anniversary Volume” (K. O. Friedrichs, O. E. Neugebauer and J. J. Stoker, eds.), pp. 187-204. Wiley (Interscience), New York, 1948"
and have been generalized in
"Mangasarian, O. L., & Fromovitz, S. (1967). The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. Journal of Mathematical Analysis and Applications, 17(1), 37-47."
In a simplified setting, assume we want to
\begin{align}
\max_x &f(x)\\
s.t. & g(x) \ge 0\\
& h(x) = 0
\end{align}
Then the Lagrangean in the case of the Fritz John conditions is formed as
$$L_{FJ} = \xi f(x) + \lambda g(x)+\mu h(x) $$
$$\lambda \ge 0,\qquad \mu \text{ free in sign},\qquad \xi \in\{0,1\},\qquad (\xi , \lambda, \mu)\neq \mathbf 0$$
The new element is the multiplier $\xi$ on the objective function, which takes only the values zero or unity (after normalization). If a solution necessitates that $\xi =1$, we obtain the KKT conditions with the constraint qualification satisfied. If a solution necessitates that $\xi =0$, it reflects, among other special cases, the case where the constraint qualification fails to hold.
A standard example is the case where the feasible set for $x$ has been reduced to a single point due to the constraints. Then we will find that the only solution dictates that $\xi=0$, which has an intuitive explanation: if $x$ can take one and only one value due to the constraints, then the objective function "plays no role" in the determination of $x$ and so it gets a zero multiplier.
This is not as special a case as it may appear: our problem may contain parameters that vary over some range, and for some combination(s) of their values the feasible set may shrink to a single point.
If you apply the KKT conditions in such a setting using only algebraic calculations, you could end up characterizing such cases as having "no solution" while a solution does in fact exist. I have seen it happen, and this is why I believe the FJ conditions are superior.
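A minimal sketch of this phenomenon, on a hypothetical one-dimensional example of my own (not from the answer above): maximize $f(x)=x$ subject to $g(x)=-x^2 \ge 0$, whose only feasible point is $x^*=0$. The constraint gradient vanishes there, so KKT stationarity has no solution, while the Fritz John conditions hold with $\xi=0$:

```python
# Hypothetical example: max f(x) = x  s.t.  g(x) = -x**2 >= 0.
# The only feasible point is x* = 0, where the constraint
# qualification fails because g'(x*) = 0.

def df(x):   # derivative of the objective f(x) = x
    return 1.0

def dg(x):   # derivative of the constraint g(x) = -x**2
    return -2.0 * x

x_star = 0.0

# KKT stationarity: df + lambda * dg = 0 with lambda >= 0.
# At x* = 0 we get 1 + lambda * 0 = 1, so no multiplier works:
kkt_solvable = any(abs(df(x_star) + lam * dg(x_star)) < 1e-12
                   for lam in [0.0, 1.0, 10.0, 1e6])

# Fritz John stationarity: xi * df + lambda * dg = 0 with
# (xi, lambda) != 0.  Taking xi = 0, lambda = 1 works:
xi, lam = 0.0, 1.0
fj_residual = xi * df(x_star) + lam * dg(x_star)

print(kkt_solvable)   # False: no KKT multiplier exists
print(fj_residual)    # 0.0: FJ holds with xi = 0
```

The zero multiplier on the objective is exactly the "objective plays no role" situation described above.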
Technically, knowing all of the active constraints isn't enough. For the nonlinear optimization problem
$$
\min\limits_{x\in \mathbb{R}^m} \{f(x) : g(x) = 0, h(x)\geq 0\}
$$
we have the first-order optimality conditions
\begin{align*}
\nabla f(x) + g^\prime(x)^Ty - h^\prime(x)^Tz =& 0\\
g(x) =& 0\\
h(x) \geq& 0\\
z \geq& 0\\
h(x) \circ z =& 0
\end{align*}
where $\circ$ denotes the pointwise product and $g^\prime(x)$ and $h^\prime(x)$ are the Jacobians (total derivatives). Some kind of constraint qualification also needs to be satisfied. You can find results like this in Theorem 12.1 of Nocedal and Wright's Numerical Optimization. The last requirement, $h(x)\circ z=0$, is the complementary slackness condition; broken down componentwise, it states
$$
h_i(x) z_i = 0
$$
for $i=1,\dots,m$. If we know that a constraint is inactive at optimality, then $h_i(x) > 0$, and that forces $z_i=0$. Likewise, if the constraint is active, then we have $h_i(x)=0$ and $z_i\geq 0$. Now, suppose we knew all of the active constraints. Then we could decompose $h$ and $z$ into
\begin{align*}
h(x) =& \begin{bmatrix} h_A(x)\\h_I(x)\end{bmatrix}\\
z =& \begin{bmatrix} z_A \\ z_I\end{bmatrix}
\end{align*}
For all of the inactive constraints, we know that $z_I=0$, so the optimality conditions become
\begin{align*}
\nabla f(x) + g^\prime(x)^Ty - h_A^\prime(x)^Tz_A =& 0\\
g(x) =& 0\\
h_A(x) =& 0\\
h_I(x) >& 0\\
z_A \geq& 0
\end{align*}
Therein lies the rub. We still have to satisfy the inactive constraints, so we can't ignore them. We still also have to constrain the Lagrange multipliers that correspond to the active constraints.
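These conditions can be checked mechanically. As a minimal sketch, take a toy 1-D problem of my own choosing: $\min (x-2)^2$ subject to $h_1(x)=1-x\ge 0$ and $h_2(x)=x\ge 0$, whose minimizer is $x^*=1$ with multipliers $z=(2,0)$. The stationarity and complementary slackness residuals both vanish, with $h_1$ active and $h_2$ inactive:

```python
# Toy problem (my own example): min (x - 2)^2
#   s.t.  h1(x) = 1 - x >= 0,  h2(x) = x >= 0.
# Minimizer x* = 1, multipliers z = (2, 0).
x_star = 1.0
z = [2.0, 0.0]

h = [1.0 - x_star, x_star]      # h(x*) = (0, 1): h1 active, h2 inactive
grad_f = 2.0 * (x_star - 2.0)   # f'(x*) = -2
grad_h = [-1.0, 1.0]            # rows of h'(x) (constant here)

# Stationarity in the answer's sign convention (no equality
# constraints, so the g term drops out):
#   grad f(x) - h'(x)^T z = 0
stationarity = grad_f - sum(gh * zi for gh, zi in zip(grad_h, z))

# Complementary slackness h(x) o z = 0, componentwise:
slackness = [hi * zi for hi, zi in zip(h, z)]

print(stationarity)   # 0.0
print(slackness)      # [0.0, 0.0]
```

Note how the inactive constraint carries a zero multiplier, exactly as the decomposition above predicts.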
This is why we typically need specialized algorithms to solve inequality constrained problems, such as projection, active set, or interior point methods. They take care of the nonnegativity and complementary slackness constraints in their own ways.
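As a concrete instance of one of those approaches, here is a minimal projected-gradient sketch on a problem of my own (not from the answer): minimize $(x_1-2)^2 + (x_2+1)^2$ subject to $x \ge 0$. Projection onto the nonnegative orthant is just clipping, which is how the method enforces the inequality constraints without ever forming multipliers explicitly:

```python
# Projected gradient descent for
#   min (x1 - 2)^2 + (x2 + 1)^2   s.t.  x >= 0
# (a hypothetical illustration; the unconstrained minimum (2, -1)
# is infeasible, so the solution lies on the boundary at (2, 0)).

def grad_f(x):
    return [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]

x = [5.0, 5.0]
step = 0.25
for _ in range(100):
    g = grad_f(x)
    # gradient step followed by projection onto the feasible set:
    x = [max(xi - step * gi, 0.0) for xi, gi in zip(x, g)]

print(x)   # converges to [2.0, 0.0]: x2 sits on its bound
```

At the limit point, the active constraint $x_2 \ge 0$ picks up a positive implicit multiplier, while the inactive one contributes nothing, consistent with the reduced system above.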
It's possible for a convex optimization problem to have an optimal solution but no KKT points. Constraint qualifications such as Slater's condition, LICQ, MFCQ, etc. are necessary to ensure that an optimal solution will satisfy the KKT conditions.
For example, consider the problem
$\min x_{2}$
subject to
$(x_{1}-1)^{2}+x_{2}^{2} \leq 1$
$(x_{1}+1)^{2}+x_{2}^{2} \leq 1$
Here, the only feasible point is $x^{*}_{1}=0$, $x^{*}_{2}=0$, so that point is the optimal solution. However, if you check, you will find that the KKT conditions are not satisfied at $x^{*}$.
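The failure can be verified directly. In this minimal sketch (gradients computed by hand for this example), the KKT stationarity residual $\nabla f + \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2$ always has second component equal to $1$, whatever nonnegative multipliers we try, because both constraint gradients at $x^*$ point along the $x_1$-axis:

```python
# KKT check at x* = (0, 0) for
#   min x2  s.t.  (x1-1)^2 + x2^2 <= 1,  (x1+1)^2 + x2^2 <= 1.
x1, x2 = 0.0, 0.0

grad_f = (0.0, 1.0)                       # gradient of the objective x2
grad_g1 = (2.0 * (x1 - 1.0), 2.0 * x2)    # (-2, 0)
grad_g2 = (2.0 * (x1 + 1.0), 2.0 * x2)    # ( 2, 0)

# Stationarity requires grad_f + l1*grad_g1 + l2*grad_g2 = 0
# for some l1, l2 >= 0, but the second component is always
# 1 + l1*0 + l2*0 = 1, so no multipliers can exist:
def residual(l1, l2):
    return (grad_f[0] + l1 * grad_g1[0] + l2 * grad_g2[0],
            grad_f[1] + l1 * grad_g1[1] + l2 * grad_g2[1])

print(residual(1.0, 1.0))   # (0.0, 1.0): second component never vanishes
```

By contrast, the Fritz John conditions from the first answer are satisfied here with $\xi = 0$ and $\lambda_1 = \lambda_2 = 1$, since $\nabla g_1 + \nabla g_2 = 0$.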