The primal problem is
\begin{align}
\operatorname{minimize}_x & \quad f_0(x) \\
\text{subject to} & \quad f_i(x) \leq 0 \quad \text{for } i = 1,\ldots, m\\
& \quad Ax = b.
\end{align}
The functions $f_i, i = 0,\ldots,m$ are differentiable and convex.
Assume $x^*$ is feasible for the primal problem (so $f_i(x^*) \leq 0$ for $i = 1,\ldots,m$ and $A x^* = b$) and that there exist vectors $\lambda \geq 0$ and $\eta$ such that
$$
\tag{$\spadesuit$}\nabla f_0(x^*) + \sum_{i=1}^m \lambda_i \nabla f_i(x^*) + A^T \eta = 0
$$
and
$$
\lambda_i f_i(x^*) = 0 \quad \text{for } i = 1,\ldots,m.
$$
Because the functions $f_i$ are convex, equation ($\spadesuit$) implies that $x^*$ is a minimizer (with respect to $x$) of the Lagrangian
$$
L(x,\lambda,\eta) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \eta^T(Ax - b).
$$
Thus, if $x$ is feasible for the primal problem, then
\begin{align*}
f_0(x) & \geq f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \eta^T(Ax - b) && \text{(feasibility: } \lambda_i f_i(x) \leq 0,\ Ax = b\text{)} \\
& \geq f_0(x^*) + \sum_{i=1}^m \lambda_i f_i(x^*) + \eta^T(Ax^* - b) && \text{(}x^* \text{ minimizes } L\text{)} \\
& = f_0(x^*) && \text{(complementary slackness and } Ax^* = b\text{)}.
\end{align*}
This shows that $x^*$ is a minimizer for the primal problem.
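As a quick numerical illustration of this argument (a toy problem of my own, not from the text above): take $f_0(x) = (x-2)^2$ with the single inequality constraint $f_1(x) = x - 1 \leq 0$ and no equality constraint. The point $x^* = 1$ with $\lambda = 2$ satisfies ($\spadesuit$) and complementary slackness, and a grid check confirms it minimizes $f_0$ over the feasible set:

```python
import numpy as np

# Toy convex problem (illustration only):
#   minimize f0(x) = (x - 2)^2   subject to   f1(x) = x - 1 <= 0.
# Candidate: x* = 1 with multiplier lambda = 2.
f0 = lambda x: (x - 2.0) ** 2
x_star, lam = 1.0, 2.0

# Stationarity: f0'(x*) + lam * f1'(x*) = 2(1 - 2) + 2 * 1 = 0.
stationarity = 2 * (x_star - 2.0) + lam * 1.0
assert abs(stationarity) < 1e-12

# Dual feasibility and complementary slackness: lam >= 0, lam * f1(x*) = 0.
assert lam >= 0 and abs(lam * (x_star - 1.0)) < 1e-12

# Numerical check of global optimality over the feasible set x <= 1.
xs = np.linspace(-10.0, 1.0, 100001)
assert f0(xs).min() >= f0(x_star) - 1e-9
print(f0(x_star))  # prints 1.0
```

The grid check is, of course, no substitute for the proof above; it merely makes the chain of inequalities concrete on one example.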
I believe that you should look up the Fritz John conditions. In my opinion they are superior to the KKT conditions: they incorporate the rather ugly issue of the "constraint qualification" into the Lagrangean through the use of an additional multiplier, and they are able to uncover solutions to an optimization problem that may pass unnoticed under KKT.
The Fritz John conditions have been stated in
"F. JOHN. Extremum problems with inequalities as side conditions. In “Studies and Essays, Courant Anniversary Volume” (K. O. Friedrichs, O. E. Neugebauer and J. J. Stoker, eds.), pp. 187-204. Wiley (Interscience), New York, 1948"
and have been generalized in
"Mangasarian, O. L., & Fromovitz, S. (1967). The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. Journal of Mathematical Analysis and Applications, 17(1), 37-47."
In a simplified setting, assume we want to
\begin{align}
\max_x \; & f(x)\\
\text{s.t. } & g(x) \ge 0\\
& h(x) = 0
\end{align}
Then the Lagrangean in the case of the Fritz John conditions is formed as
$$L_{FJ} = \xi f(x) + \lambda g(x) + \mu h(x),$$
$$\lambda \ge 0,\qquad \mu \in \mathbb{R},\qquad \xi \in\{0,1\},\qquad (\xi , \lambda, \mu)\neq \mathbf 0.$$
The new element is the multiplier $\xi$ on the objective function, which takes only the values zero or unity (after normalization). If a solution necessitates that $\xi =1$, we obtain the KKT conditions with the constraint qualification satisfied. If a solution necessitates that $\xi =0$, it reflects, among other special cases, the case where the constraint qualification fails to hold.
A standard example is the case where the feasible set for $x$ has been reduced to a single point due to the constraints. Then we will find that the only solution dictates that $\xi=0$, which has an intuitive explanation: if $x$ can take one and only one value due to the constraints, then the objective function "plays no role" in the determination of $x$ and so it gets a zero multiplier.
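A minimal worked instance of this phenomenon (my own toy example, not from the cited papers): maximize $f(x) = x$ subject to the single inequality constraint $g(x) = -x^2 \ge 0$, which forces the feasible set down to the single point $x = 0$.

```latex
% Toy problem whose feasible set is the single point x = 0:
\max_x \; f(x) = x \qquad \text{s.t.} \quad g(x) = -x^2 \ge 0
% FJ stationarity at x = 0, using f'(0) = 1 and g'(0) = 0:
\xi f'(0) + \lambda g'(0) = \xi \cdot 1 + \lambda \cdot 0 = \xi = 0
```

Setting $\xi = 1$ (the KKT case) makes stationarity unsolvable, so the only Fritz John multipliers have $\xi = 0$ with $\lambda > 0$: the objective indeed "plays no role," and no constraint qualification holds at $x = 0$.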
This case is not as special as it may appear: our problem may contain parameters that vary over some range, and for some combination(s) of their values the feasible set may reduce to a single point.
If you apply the KKT conditions in such a setting using only algebraic calculations, you could end up characterizing such cases as "no solution" while a solution does exist. I have seen it happen - this is why I believe that the FJ conditions are superior.
Best Answer
Yes, something stronger is needed. The difficulty with your question is that the global minimizer may fail to satisfy the KKT conditions while some other point does satisfy them. For example, the problem \begin{align} \text{min } & -x^2+x^3 \\ \text{s.t. } & x^3(x+1)^3\leq 0 \end{align} has only one global minimizer and only one KKT point, the origin, which is not the global minimizer. This happens because a minimizer needs a constraint qualification to hold in order to satisfy the KKT conditions. For your assertion to be true, you must require that every point of the feasible set, not only one, satisfies a constraint qualification, such as MFCQ. In that context the result is true: since every feasible point satisfies MFCQ, if your problem admits a minimum, that global minimizer must satisfy KKT; and if your problem has only one KKT point, that KKT point must be the global minimizer. Now you could ask me:
Yes, it's needed. That's because even if only one point does not satisfy a constraint qualification, the global minimizer might be that point, and such a point might not be a KKT one. Now, you could turn back to me again and ask me:
No, it does not help at all. The PSD says that your point is a local minimizer, and being a local minimizer does not imply that it is the global minimizer, since you don't know whether your problem has only one local minimizer.
You could ask:
Yes, you need to test whether your point is the only one satisfying a genuine optimality condition, i.e., a condition that holds at every local minimizer of your problem. In such a case, I would test whether there exists another point $x$ such that $$x \text{ satisfies the KKT conditions, or MFCQ does not hold at } x.$$ This condition holds at every optimal point, and since the point satisfying it is unique, that point must be the global minimizer, whenever one exists.
Now you could ask me:
That type of question is exactly what I have addressed here. I am not aware of a precise definition of constraint qualification. I, and as far as I know only I, would take a constraint qualification to be any mathematical assertion/proposition that implies the Guignard constraint qualification. Quite artificial, but extremely precise, since I would bet that there will never be a constraint qualification that does not imply the Guignard constraint qualification, due to the result of Gould and Tolle.
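As a side note, the counterexample from the answer above, $\min\, -x^2+x^3$ subject to $x^3(x+1)^3 \le 0$, can be checked numerically. The sketch below (my own script, assuming NumPy is available) confirms that the feasible set is $[-1,0]$, that the global minimizer is $x=-1$, and that only the origin is a KKT point:

```python
import numpy as np

# Counterexample:  min f(x) = -x^2 + x^3   s.t.   g(x) = x^3 (x+1)^3 <= 0.
f = lambda x: -x**2 + x**3
g = lambda x: x**3 * (x + 1) ** 3

# The feasible set is (numerically) the interval [-1, 0].
xs = np.linspace(-3.0, 3.0, 600001)
feasible = xs[g(xs) <= 0]
assert abs(feasible.min() + 1.0) < 1e-4 and abs(feasible.max()) < 1e-4

# Global minimizer over the feasible set is x = -1, not the origin.
x_min = feasible[np.argmin(f(feasible))]
assert abs(x_min + 1.0) < 1e-4

# KKT stationarity: f'(x) + lam * g'(x) = 0 with lam >= 0.
fp = lambda x: -2 * x + 3 * x**2
gp = lambda x: 3 * x**2 * (x + 1) ** 2 * (2 * x + 1)
# At the origin, f'(0) = 0 and g'(0) = 0, so KKT holds with any lam >= 0.
assert fp(0.0) == 0.0 and gp(0.0) == 0.0
# At the true minimizer x = -1, f'(-1) = 5 but g'(-1) = 0, so
# 5 + lam * 0 = 0 has no solution: x = -1 is not a KKT point.
assert fp(-1.0) == 5.0 and gp(-1.0) == 0.0
```

The grid resolution (step $10^{-5}$) is arbitrary; any reasonably fine grid gives the same picture.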