There is an error in the Lagrangian: it should contain $\lambda_i^-(l-x_i)$ (not $\lambda_i^-(-l-x_i)$), which also affects the complementary slackness condition: it should be $\lambda_i^-(l-x_i)=0$.
1) The KKT system you have written is nothing other than the subgradient optimality condition
$$
0\in \partial J(x),
$$
where
$$
J(x) = f(x) + c\|x\|_1 + I_{(-\infty,u]}(x) + I_{[l,+\infty)}(x),
$$
and $I_{(-\infty,u]}$ and $I_{[l,+\infty)}$ denote the indicator functions of the boxes $(-\infty,u]$ and $[l,+\infty)$ (understood componentwise).
Now verify that the conditions for the subgradient sum rule are fulfilled, and write out the individual subgradients to obtain your system.
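For instance, assuming for simplicity that $f$ is differentiable (otherwise keep $\partial f$), the sum rule gives, componentwise (writing $l_i,u_i$ for the components of the bounds),
$$
0\in \frac{\partial f}{\partial x_i}(x)+c\,\partial|x_i|+N_{(-\infty,u_i]}(x_i)+N_{[l_i,+\infty)}(x_i),\qquad i=1,\dots,n,
$$
where the normal cones are
$$
N_{(-\infty,u_i]}(x_i)=\begin{cases}\{0\}, & x_i<u_i,\\ [0,+\infty), & x_i=u_i,\end{cases}
\qquad
N_{[l_i,+\infty)}(x_i)=\begin{cases}\{0\}, & x_i>l_i,\\ (-\infty,0], & x_i=l_i.\end{cases}
$$
Naming the chosen elements of these cones $\lambda_i^+\ge 0$ and $-\lambda_i^-$ with $\lambda_i^-\ge 0$ recovers your multipliers together with the complementary slackness conditions $\lambda_i^+(x_i-u_i)=0$ and $\lambda_i^-(l_i-x_i)=0$ (modulo your sign conventions).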
2) For a convex program the KKT conditions are always sufficient. They are necessary under a constraint qualification (here: the validity of the subgradient sum rule, which holds in your case).
3) Yes, this follows from the (corrected) complementary slackness condition #2. A small numerical sanity check of the whole system is sketched below.
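Here is that sanity check: a minimal, self-contained sketch with an assumed toy quadratic $f$ (not your problem data). It runs proximal gradient on $J$; the fixed-point residual vanishes exactly when $0\in\partial J(x)$, i.e. when the (corrected) KKT system holds.

```python
import numpy as np

# Toy instance (assumed for illustration): f(x) = 0.5*||A x - y||^2,
# l1 weight c, and box constraints l <= x <= u.
rng = np.random.default_rng(0)
m, n = 8, 5
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)
c = 0.3
l = -0.5 * np.ones(n)
u = 1.0 * np.ones(n)

def grad_f(x):
    return A.T @ (A @ x - y)

def prox(v, t):
    # prox of t*(c*||.||_1 + indicator of [l, u]) acts componentwise:
    # soft-threshold, then clip to the box (valid since each 1-D piece is convex).
    return np.clip(np.sign(v) * np.maximum(np.abs(v) - t * c, 0.0), l, u)

t = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size 1/L, L = Lipschitz constant of grad f
x = np.zeros(n)
for _ in range(5000):                  # proximal gradient iterations
    x = prox(x - t * grad_f(x), t)

# Fixed-point residual: zero exactly when 0 lies in the subdifferential of J at x.
print(x)
print(np.linalg.norm(x - prox(x - t * grad_f(x), t)))
```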
There's a typo in your expression for the derivative of the Lagrangian. It should be
$$
\frac{\partial L}{\partial x_i}=c_i{\color{red}-}\lambda_i+\mu\ .
$$
It's probably easier to recognise the solution of this problem by guesswork than by trying to solve the Karush-Kuhn-Tucker conditions, but here's one way of doing the latter.
Since $\ \sum\limits_{i=1}^n x_i=1\ $ and $\ x_i\ge 0\ $, we must have $\ x_i>0\ $ for at least one $\ i\in\{1,2,\dots,n\}\ $. Let $\ i_1, i_2, \dots, i_r\ $ be those indices for which $\ x_{i_j}>0\ $. From the condition $\ \lambda_i x_i=0\ $, we then have $\ \lambda_{i_j}=0\ $ for $\ j=1,2,\dots,r\ $. Then from the conditions $\ \frac{\partial L}{\partial x_{i_j}}=0\ $, we get $\ \mu=-c_{i_j}\ $. Thus, $\ -c_{i_j}\ $ must have the same value, namely $\ \mu\ $, for all the indices $\ i_j\ $. So if the $\ c_i\ $ are all different, we must have $\ r=1\ $, and $\ x_{i_1}=1\ $ will be the only variable with a positive value. In any case, for any index $\ i \not\in\{i_1,i_2,\dots,i_r\}\ $, the conditions $\ \frac{\partial L}{\partial x_i}=0\ $ and $\ \lambda_i\ge 0\ $ give
$$
\lambda_i =c_i+\mu= c_i-c_{i_j}\ge 0\ .
$$
That is, $\ c_{i_j}\le c_i\ $ for all $\ j=1,2,\dots,r\ $ and $\ i \not\in\{i_1,i_2,\dots,i_r\}\ $. In other words, $\ c_{i_j}=\min(c_1,c_2,\dots,c_n)\ $ for all $\ j=1,2,\dots,r\ $.
If $\ i_1\ $ is the only value of $\ i\ $ for which $\ c_i=\min(c_1,c_2,\dots,c_n)\ $, then the solution is unique: $\ x_{i_1}=1\ $, and $\ x_i=0\ $ for $\ i\ne i_1\ $. If $\ r>1\ $, however, then any assignment of non-negative values to $\ x_{i_1}, x_{i_2}, \dots, x_{i_r}\ $ such that $\ \sum\limits_{j=1}^r x_{i_j}=1\ $, together with $\ x_i=0\ $ for $\ i \not\in\{i_1,i_2,\dots,i_r\}\ $, will achieve the minimum, $\ \min(c_1,c_2,\dots,c_n)\ $, of the objective.
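If you want to confirm this numerically, here is a minimal sketch with an assumed cost vector, using SciPy's LP solver (the feasible set is the probability simplex, so the optimal value is $\min_i c_i$):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 1.0, 4.0, 1.5])   # assumed costs; unique minimum at index 1
res = linprog(c, A_eq=np.ones((1, 4)), b_eq=[1.0], bounds=[(0, None)] * 4)

print(res.x)               # [0, 1, 0, 0]: all mass on the argmin of c
print(res.fun, c.min())    # optimal value equals min(c) = 1.0
```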
Best Answer
You are almost there. First, notice that $c - \lambda_1 \mathbf{1} - \lambda_2 = \mathbf{0} \implies \lambda_2 = c - \lambda_1 \mathbf{1}$. Plugging this back into $L$ we get
$$
\begin{aligned}
L &= c^{\mathsf{T}} x - \langle \lambda_1,\mathbf{1}^{\mathsf{T}}x-b\rangle - \langle \lambda_2,x\rangle\\
&= c^{\mathsf{T}} x - \langle \lambda_1,\mathbf{1}^{\mathsf{T}}x-b\rangle - \langle c-\lambda_1\mathbf{1},x\rangle \\
&= \left(c^{\mathsf{T}} x - \lambda_1\mathbf{1}^{\mathsf{T}}x + \lambda_1 b\right) - \left(c^{\mathsf{T}} x - \lambda_1\mathbf{1}^{\mathsf{T}}x\right)\\
&= \lambda_{1}b\,.
\end{aligned}
$$
Second, you are technically missing the conditions $\mathbf{1}^{\mathsf{T}}x=b$ and $x\geq 0$ in your KKT conditions.
Lastly, you found that $c_i \geq \lambda_1$ for all $i$. There are two cases to check: $\lambda_1 = \min_i c_i$ or $\lambda_1 < \min_i c_i$. If $\lambda_1 < \min_i c_i$, then $[\lambda_2]_i > 0$ for all $i$, which by complementary slackness implies that $x_i = 0$ for all $i$. This can only occur if $b=0$ and is a degenerate case, since the constraint $\mathbf{1}^{\mathsf{T}}x = 0$ together with $x\geq 0$ already forces $x = 0$. If $b>0$, then it must be the case that $\lambda_1 = \min_i c_i$. From here it should be straightforward to determine what $\lambda_2$ is and also what $x_i$ is.
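For what it's worth, here is a small sketch (with an assumed $c$ and $b>0$) checking that the choice $\lambda_1=\min_i c_i$, $\lambda_2=c-\lambda_1\mathbf{1}$, and $x$ putting all of the mass $b$ on an index attaining the minimum satisfies every KKT condition above:

```python
import numpy as np

c = np.array([3.0, 1.0, 4.0, 1.5])   # assumed costs
b = 2.0                              # assumed right-hand side, b > 0

lam1 = c.min()                       # lambda_1 = min_i c_i
lam2 = c - lam1                      # lambda_2 = c - lambda_1 * 1 >= 0
x = np.zeros_like(c)
x[np.argmin(c)] = b                  # all mass on an index attaining min_i c_i

print(np.allclose(c - lam1 - lam2, 0.0))   # stationarity
print(np.all(lam2 >= 0), np.all(x >= 0))   # dual / primal sign constraints
print(np.isclose(x.sum(), b))              # 1^T x = b
print(np.isclose(lam2 @ x, 0.0))           # complementary slackness
print(c @ x, lam1 * b)                     # objective value equals lambda_1 * b
```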