There is a neat geometric way to see why this should be so, but I've forgotten some of the details. Some papers of Daubechies, Donoho, etc. contain these details. Unfortunately, I've forgotten those refs too. So I'll give you the somewhat lazy solution (you probably already figured out that I'm a very lazy person), based on proximal operators and the Moreau identity...
For a convex function $f: \mathbb R^n \rightarrow (-\infty, +\infty]$, define its proximal operator
$$\text{prox}_{\lambda f}(a) := \underset{x \in \mathbb R^n}{\text{argmin }} \frac{1}{2}\|x-a\|_2^2 + \lambda f(x),$$
where $\lambda > 0$ is a varying parameter. Think of this as generalizing the notion of projection onto a convex set, where the indicator function is replaced with a more general $f$.
Your problem amounts to computing the proximal operator of the $\ell_1$-norm.
Define the (Legendre) convex conjugate of $f$ by
$$f^*(a) := \sup_{x \in \mathbb R^n}\,\langle a, x\rangle - f(x).$$
Now, if $f := \|.\|_1$, and we define the cube $C^{(n)} := \{z \in \mathbb R^n \mid \|z\|_\infty \le \lambda\} = C^{(1)} \times \ldots \times C^{(1)}$, then $f^*\left(\frac{a}{\lambda}\right) = i_{C^{(n)}}(a)$ (a standard computation: the conjugate of the $\ell_1$-norm is the indicator of the unit $\ell_\infty$-ball), and so
$$\lambda\,\text{prox}_{\frac{1}{\lambda} f^*}\left(\frac{a}{\lambda}\right) = \ldots = P_{C^{(n)}}(a) = (P_{C^{(1)}}(a_1),\ldots,P_{C^{(1)}}(a_n)) = (\text{sgn}(a_1)\min(|a_1|, \lambda), \ldots, \text{sgn}(a_n)\min(|a_n|, \lambda)),$$
the Euclidean projection of $a$ onto the cube $C^{(n)}$. Thus, by the Moreau identity, we get
$$ \text{prox}_{\lambda f}(a) = a - \lambda\,\text{prox}_{\frac{1}{\lambda} f^*}\left(\frac{a}{\lambda}\right) = a - P_{C^{(n)}}(a) = (a_1 - \text{sgn}(a_1)\min(|a_1|, \lambda), \ldots, a_n - \text{sgn}(a_n)\min(|a_n|, \lambda)) = (S_\lambda(a_1),\ldots, S_\lambda(a_n)),$$
where $S_\lambda: t \mapsto \text{sgn}(t)(|t| - \lambda)_+$ is the soft-thresholding operator.
N.B.: $\text{sgn}(0) := 0$. Also, note that $t = |t|\text{sgn}(t)$, for all $t \in \mathbb R$.
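For concreteness, here is a quick numerical sanity check of this in Python (function names are mine): the componentwise soft-thresholder agrees with $a - P_{C^{(n)}}(a)$, as the Moreau identity predicts.

```python
import numpy as np

def soft_threshold(a, lam):
    """Componentwise soft-thresholding: the prox of lam * ||.||_1."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def project_cube(a, lam):
    """Euclidean projection onto the cube {z : ||z||_inf <= lam}."""
    return np.sign(a) * np.minimum(np.abs(a), lam)

# Moreau identity check: prox_{lam f}(a) = a - P_C(a)
a = np.array([3.0, -0.5, 1.2, -4.0, 0.0])
lam = 1.0
assert np.allclose(soft_threshold(a, lam), a - project_cube(a, lam))
```

Both operations act coordinate-by-coordinate, which is exactly the separability across $C^{(1)} \times \ldots \times C^{(1)}$ used above.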
Hope this helps. I can provide finer details if needed.
Best Answer
Coordinate-wise equality constraints are trivial: just fix $x_i=c_i$ and skip that coordinate during the search. More complex equality constraints coupling several coordinates are likely not workable here; after all, with all of the other coordinates held fixed, you typically cannot move $x_i$ at all while maintaining feasibility.
Coordinate-wise bounds are not difficult to deal with, either. When you are working with coordinate $i$, you simply have to respect the corresponding inequality constraints: descend in an unconstrained fashion, but stop if the descent direction would take you outside of the interval $[l_i,u_i]$.
If, at a given iteration, you find that you cannot take a step in a descent direction without violating the constraint, then you don't: $x_i$ remains fixed, and you move to the next coordinate. So for instance, suppose $x_i\neq 0$; then the partial derivative is $$\frac{\partial f}{\partial x_i} = x^TAe_i + b^Te_i + \lambda \mathop{\textrm{sign}}(x_i)$$ If $x_i=l_i$ and this quantity is positive, or if $x_i=u_i$ and this quantity is negative, then the value of $x_i$ will not change in this iteration. If $x_i=0$, of course, then you have a subgradient to deal with, but the point is the same.
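As a sketch of the box-constrained coordinate update, assuming the objective is $f(x) = \frac12 x^TAx + b^Tx + \lambda\|x\|_1$ with $A$ symmetric positive definite (consistent with the partial derivative above; the function names are mine): each one-dimensional subproblem is convex, so clipping its unconstrained minimizer to $[l_i,u_i]$ gives the constrained minimizer.

```python
import numpy as np

def soft_threshold(t, lam):
    """Scalar soft-thresholding: sgn(t) * max(|t| - lam, 0)."""
    return np.sign(t) * max(abs(t) - lam, 0.0)

def box_coordinate_step(A, b, lam, x, i, l, u):
    """Exact minimization over coordinate i of
    0.5*x^T A x + b^T x + lam*||x||_1, subject to l[i] <= x[i] <= u[i].
    The 1-D objective is convex, so clipping the unconstrained
    minimizer to [l[i], u[i]] yields the constrained minimizer.
    """
    # Partial gradient of the smooth part, excluding the x[i] term:
    c = A[i] @ x - A[i, i] * x[i] + b[i]
    # Unconstrained 1-D minimizer of 0.5*A_ii*t^2 + c*t + lam*|t|:
    x_star = soft_threshold(-c, lam) / A[i, i]
    # Clip to the box constraint for this coordinate.
    x[i] = min(max(x_star, l[i]), u[i])
    return x
```

When the unconstrained minimizer already lies in $[l_i,u_i]$, the clip is a no-op and this reduces to the usual coordinate-descent update for the lasso-type objective.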
More complex linear inequality constraints should be possible to handle as long as they define a polyhedron with a non-empty interior (so, e.g., there cannot be "opposing" inequalities creating an effective equality constraint). Consider constraints $Fx\leq g$, and suppose that the current point $x$ is feasible (that is, $Fx\leq g$), and we're scheduled to move coordinate $i$. Then we seek $[t_\min,t_\max]$ as follows: $$t_\min=\inf\{t~|~F(x+te_i)\leq g\}, \quad t_\max=\sup\{t~|~F(x+te_i)\leq g\}$$ This can be solved analytically; and because $Fx\leq g$, we know that $t_\min\leq 0$ and $t_\max\geq 0$. Then we can define $$l_i = x_i + t_\min, \quad u_i = x_i + t_\max$$ and proceed with a coordinate descent constrained to the interval $[l_i,u_i]$, just as we did above.
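The analytic solution for $[t_\min, t_\max]$ can be sketched as follows (function name is mine): feasibility of $x+te_i$ requires $t\,F_{ki} \le g_k - (Fx)_k$ for each row $k$, which bounds $t$ above when $F_{ki}>0$ and below when $F_{ki}<0$.

```python
import numpy as np

def coordinate_interval(F, g, x, i):
    """Given feasible x (F @ x <= g), return (t_min, t_max) such that
    x + t*e_i remains feasible for all t in [t_min, t_max]."""
    r = g - F @ x                  # slack per row; >= 0 at a feasible x
    col = F[:, i]
    t_min, t_max = -np.inf, np.inf
    pos, neg = col > 0, col < 0
    if pos.any():                  # rows that cap t from above
        t_max = np.min(r[pos] / col[pos])
    if neg.any():                  # rows that cap t from below
        t_min = np.max(r[neg] / col[neg])
    return t_min, t_max            # t_min <= 0 <= t_max by feasibility
```

Rows with $F_{ki}=0$ impose no restriction on $t$, and if no row restricts a side, that side of the interval is unbounded.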