I believe this is what he is saying (though I could be wrong):
(1) optimality + strong duality $\implies$ KKT (for all problems)
(2) KKT $\implies$ optimality + strong duality (for convex/differentiable problems)
(3) Slater's condition + convexity $\implies$ strong duality, so then, GIVEN that strong duality holds,
(3a) KKT $\Leftrightarrow$ optimality
If, for a primal convex/differentiable problem, you find points satisfying KKT, then yes, by (2), they are optimal with strong duality.
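As a quick illustration of (2), here is a symbolic check on a toy convex problem of my own choosing (minimize $x^2$ subject to $x \ge 1$), a sketch using SymPy; the problem and variable names are not from the discussion above:

```python
import sympy as sp

x, lam = sp.symbols('x lam', real=True)

# Hypothetical convex toy problem: minimize x^2 subject to x >= 1
f = x**2
g = x - 1        # constraint written as g(x) >= 0

# KKT for min f s.t. g >= 0: f'(x) - lam*g'(x) = 0, lam >= 0,
# complementary slackness lam*g(x) = 0
stationarity = sp.diff(f, x) - lam * sp.diff(g, x)   # 2*x - lam
complementarity = lam * g                            # lam*(x - 1)

sols = sp.solve([stationarity, complementarity], [x, lam], dict=True)
# keep only primal-feasible (x >= 1) and dual-feasible (lam >= 0) candidates
kkt = [s for s in sols if s[x] >= 1 and s[lam] >= 0]
print(kkt)
```

The surviving KKT point is $x = 1$ with $\lambda = 2$, which is indeed the global minimum, as (2) promises for a convex differentiable problem.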
I believe you should look up the Fritz John conditions. In my opinion they are superior to the KKT conditions: they incorporate the rather ugly issue of the "constraint qualification" into the Lagrangean through an additional multiplier, and they can uncover solutions to an optimization problem that may pass unnoticed under KKT.
The Fritz John conditions have been stated in
F. John, "Extremum problems with inequalities as subsidiary conditions", in Studies and Essays, Courant Anniversary Volume (K. O. Friedrichs, O. E. Neugebauer and J. J. Stoker, eds.), pp. 187–204, Wiley (Interscience), New York, 1948.
and have been generalized in
O. L. Mangasarian and S. Fromovitz, "The Fritz John necessary optimality conditions in the presence of equality and inequality constraints", Journal of Mathematical Analysis and Applications, 17(1), 37–47, 1967.
In a simplified setting, assume we want to
\begin{align}
\max_x\quad & f(x)\\
\text{s.t.}\quad & g(x) \ge 0\\
& h(x) = 0
\end{align}
Then the Lagrangean for the Fritz John conditions is formed as
$$L_{FJ} = \xi f(x) + \lambda g(x) + \mu h(x)$$
$$\lambda \ge 0,\qquad \mu \in \mathbb{R},\qquad \xi \in\{0,1\},\qquad (\xi, \lambda, \mu)\neq \mathbf 0$$
(Note that $\mu$, the multiplier on the equality constraint, is unrestricted in sign.)
The new element is the multiplier $\xi$ on the objective function, which takes only the values zero or one (after normalization). If a solution requires $\xi = 1$, we obtain the KKT conditions with the constraint qualification satisfied. If a solution requires $\xi = 0$, this reflects, among other special cases, a situation where the constraint qualification fails to hold.
A standard example is the case where the feasible set for $x$ has been reduced to a single point due to the constraints. Then we will find that the only solution dictates that $\xi=0$, which has an intuitive explanation: if $x$ can take one and only one value due to the constraints, then the objective function "plays no role" in the determination of $x$ and so it gets a zero multiplier.
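The single-point case can be checked symbolically. Here is a sketch using SymPy on a toy problem of my own construction (maximize $f(x)=x$ subject to $g(x)=-x^2 \ge 0$, whose only feasible point is $x=0$); the problem is an assumption for illustration, not taken from the text above:

```python
import sympy as sp

x, xi, lam = sp.symbols('x xi lam', real=True)

# Hypothetical toy problem: maximize f(x) = x subject to g(x) = -x^2 >= 0,
# so the only feasible point is x = 0
f = x
g = -x**2

# FJ stationarity: d/dx [ xi*f(x) + lam*g(x) ] = 0
stationarity = sp.diff(xi * f + lam * g, x)   # xi - 2*lam*x

# KKT attempt (xi = 1) at the only feasible point x = 0:
kkt = sp.solve([stationarity.subs(xi, 1), x], [x, lam], dict=True)
print(kkt)        # empty: no multiplier lam works, KKT fails

# FJ with xi = 0: stationarity holds at x = 0 for any lam > 0
fj_residual = stationarity.subs({xi: 0, x: 0})
print(fj_residual)
```

With $\xi = 1$ the stationarity condition $1 - 2\lambda x = 0$ has no solution at $x = 0$, while with $\xi = 0$ it holds for any $\lambda > 0$: exactly the "objective plays no role" situation described above.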
This is not as special a case as it may appear: our problem may contain parameters that vary over some range, and for some combination(s) of their values the feasible set may shrink to a single point.
If you have applied the KKT conditions in such a setting using only algebraic calculations, you could end up characterizing such cases as "no solution" while a solution does exist. I have seen it happen; this is why I believe the FJ conditions are superior.
Best Answer
The KKT conditions are not necessary for optimality even for convex problems. Consider $$ \min x $$ subject to $$ x^2\le 0. $$ The constraint is convex. The only feasible point, thus the global minimum, is given by $x=0$. The gradient of the objective is $1$ at $x=0$, while the gradient of the constraint is zero. Thus, the KKT system cannot be satisfied.
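This counterexample can be verified symbolically. A minimal sketch with SymPy (the variable names are my own):

```python
import sympy as sp

x, lam = sp.symbols('x lam', real=True)

f = x            # objective: minimize x
g = x**2         # constraint: x^2 <= 0, only feasible point is x = 0

# KKT stationarity for min f s.t. g <= 0: f'(x) + lam*g'(x) = 0
stationarity = sp.diff(f, x) + lam * sp.diff(g, x)   # 1 + 2*lam*x

# At the only feasible point x = 0 the constraint gradient vanishes,
# so stationarity reduces to 1 = 0
residual = stationarity.subs(x, 0)
print(residual)   # 1: never zero for any lam

sols = sp.solve([stationarity, x], [x, lam], dict=True)
print(sols)       # empty: no KKT point exists
```

The residual $1$ confirms that no multiplier $\lambda$ can satisfy stationarity at the global minimum $x = 0$, so the KKT system has no solution even though the problem is convex and solvable.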