If you want to find the proximal operator of $\|x\|_{\infty}$, you don't want to compute the subgradient directly. Rather, as the previous answer mentioned, we can use the Moreau decomposition:
$$ v = \textrm{prox}_{f}(v) + \textrm{prox}_{f^*}(v)$$
where $f^*$ is the convex conjugate, given by:
$$ f^*(x) = \underset{y}{\sup}\;(x^Ty - f(y))$$
In the case of norms, the convex conjugate is an indicator function based on the dual norm, i.e. if $f(x) = \|x\|_p$, for $p \geq 1$, then $f^*(x) = 1_{\{\|x\|_q \leq 1\}}(x)$, where $1/p + 1/q = 1$, and the indicator function is:
\begin{equation}
1_S(x)=\begin{cases}
0, & \text{if $x \in S$}.\\
\infty, & \text{if $x \notin S$}.
\end{cases}
\end{equation}
For your particular question, $f(x) = \|x\|_{\infty}$, so $f^*(x) = 1_{\{\|x\|_1\leq 1\}}(x)$.
We know
$$\textrm{prox}_{f}(x) = x - \textrm{prox}_{f^*}(x)$$
Thus we need to find
$$\textrm{prox}_{f^*}(x) = \underset{z}{\arg\min} \; \left(1_{\{\|z\|_1 \leq 1\}}(z) + \frac{1}{2}\|z - x\|_2^2 \right)$$
But this is simply projection onto the $L_1$ ball, thus the prox of the infinity norm is given by:
$$ \textrm{prox}_{\|\cdot\|_{\infty}}(x) = x - \textrm{Proj}_{\{\|\cdot\|_1 \leq 1\}}(x)$$
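As an illustrative sketch (the function names here are mine, not from the reference below), this identity takes only a few lines of NumPy; the $\ell_1$-ball projection uses the well-known sorting-based algorithm:

```python
import numpy as np

def project_l1_ball(x, radius=1.0):
    """Euclidean projection onto the l1 ball {z : ||z||_1 <= radius},
    via the standard sorting-based algorithm."""
    if np.sum(np.abs(x)) <= radius:
        return x.copy()                             # already inside the ball
    u = np.sort(np.abs(x))[::-1]                    # magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u > (css - radius) / np.arange(1, len(u) + 1))[0][-1]
    theta = (css[rho] - radius) / (rho + 1)         # soft-threshold level
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def prox_linf(x):
    """prox of ||.||_inf via Moreau: x minus its projection onto the unit l1 ball."""
    return x - project_l1_ball(x, radius=1.0)
```

For instance, when $\|x\|_1 \leq 1$ the projection is the identity, so the prox is zero, consistent with $x$ being a subgradient of $\|\cdot\|_{\infty}$ at the origin.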
The best reference for this is Neal Parikh, Stephen Boyd - Proximal Algorithms.
Observe that, in the context of the problem, the following two problems are equivalent:
Problem 001
$$
\begin{alignat*}{3}
\arg \min_{x} & \quad & \frac{1}{2} \left\| A x - b \right\|_{2}^{2} \\
\text{subject to} & \quad & \boldsymbol{1}^{T} x = 1 \\
& \quad & {x}_{i} \in \left[ 0, 1 \right], \; \forall i
\end{alignat*}
$$
Problem 002
$$
\begin{alignat*}{3}
\arg \min_{x} & \quad & \frac{1}{2} \left\| A x - b \right\|_{2}^{2} \\
\text{subject to} & \quad & \boldsymbol{1}^{T} x = 1 \\
& \quad & x \succeq 0
\end{alignat*}
$$
The second one is basically Least Squares constrained to the Unit Simplex.
There is no closed form solution, but it can be solved using Projected Gradient Descent, since there are efficient techniques to project onto the Simplex.
A full solution can be found in my answer to Least Squares with Unit Simplex Constraint.
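A minimal sketch of that approach, assuming NumPy (the helper names `project_simplex` and `simplex_ls` are mine, not from the referenced answer):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the unit simplex {x : x >= 0, 1^T x = 1},
    via the standard sorting-based algorithm."""
    u = np.sort(v)[::-1]                          # sorted descending
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    theta = css[rho] / (rho + 1)
    return np.maximum(v - theta, 0.0)

def simplex_ls(A, b, n_iter=500):
    """Projected Gradient Descent for min 0.5*||Ax - b||^2 over the unit simplex."""
    m, n = A.shape
    x = np.full(n, 1.0 / n)                       # feasible starting point
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = project_simplex(x - grad / L)         # gradient step, then project
    return x
```

The step size $1/L$ with $L = \sigma_{\max}(A)^2$ guarantees convergence for this smooth convex objective.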
Best Answer
The problem is given by:
$$\begin{aligned} \arg \min_{x} \quad & \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \gamma {\left\| x \right\|}_{1} \\ \text{subject to} \quad & \boldsymbol{1}^{T} x = b \end{aligned}$$
The Lagrangian is given by:
$$ L \left( x, \beta \right) = \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \gamma {\left\| x \right\|}_{1} + \beta \left( \boldsymbol{1}^{T} x - b \right) $$
The solution must obey KKT Conditions (The problem is Convex and Slater Conditions are satisfied) which are given by:
$$\begin{align*} \nabla L \left( x, \beta \right) = x - y + \gamma \partial {\left\| x \right\|}_{1} + \beta \boldsymbol{1} & = 0 & \text{(1)} \\ \boldsymbol{1}^{T} x & = b & \text{(2)} \\ \end{align*}$$
From (1) it is clear that $ x = \operatorname{Prox}_{ \gamma {\left\| \cdot \right\|}_{1} } \left( y - \beta \boldsymbol{1} \right) $ (As a shifted solution to the $ {L}_{1} $ Prox). Using (2) one would get:
$$ \boldsymbol{1}^{T} \operatorname{Prox}_{ \gamma {\left\| \cdot \right\|}_{1} } \left( y - \beta \boldsymbol{1} \right) = b \Rightarrow b - \boldsymbol{1}^{T} \operatorname{Prox}_{ \gamma {\left\| \cdot \right\|}_{1} } \left( y - \beta \boldsymbol{1} \right) = 0 $$
Using the explicit solution of $ \operatorname{Prox}_{ \gamma {\left\| \cdot \right\|}_{1} } \left( \cdot \right) $ as the Soft Threshold operator yields:
$$ g \left( \beta \right) = \sum_{i = 1}^{n} \operatorname{sign} \left( {y}_{i} - \beta \right) { \left( \left| {y}_{i} - \beta \right| - \gamma \right) }_{+} - b $$
Now the problem becomes finding the root of the function $ g \left( \beta \right) $ and then plugging it into the Prox.
The function $ g \left( \beta \right) $ is monotonically decreasing in $ \beta $, and we are looking for its zero (Which solves the problem).
One could find this root with any 1D solver over the range $ \left[ \min(y) - \gamma - \left| b \right|, \max(y) + \gamma + \left| b \right| \right] $, which is guaranteed to contain the zero.
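A sketch of this 1D approach, assuming NumPy (function names are mine): bisection on $ \beta $ over the bracket above, followed by soft thresholding:

```python
import numpy as np

def g(beta, y, gamma, b):
    """Monotonically decreasing function whose root gives the multiplier beta."""
    t = y - beta
    return np.sum(np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)) - b

def prox_l1_sum(y, gamma, b, tol=1e-10):
    """prox of gamma*||.||_1 restricted to {x : 1^T x = b}, via bisection on beta."""
    lo = y.min() - gamma - abs(b)                 # g(lo) >= 0
    hi = y.max() + gamma + abs(b)                 # g(hi) <= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid, y, gamma, b) > 0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    t = y - beta                                  # soft threshold of the shifted input
    return np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)
```

Note that with $ \gamma = 0 $ this reduces to projection onto the hyperplane $ \boldsymbol{1}^{T} x = b $, i.e. $ x = y - \operatorname{mean}(y) + b / n $.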
A more efficient approach is based on the fact that $ g \left( \beta \right) $ is a Piece Wise Linear function of $ \beta $ with break points where $ \left| {y}_{i} - \beta \right| = \gamma $, namely at $ \beta = {y}_{i} \pm \gamma $ (Hence there are $ 2 n $ points).
Hence, by utilizing a Bi-Section over the sorted $ 2 n $ break points, one could easily find the section on which the function has constant slope and contains the zero.
Let's mark this section by $ \left[ {\beta}_{min}, {\beta}_{max} \right] $, then $ \forall {\beta}_{i}, {\beta}_{j} \in \left[ {\beta}_{min}, {\beta}_{max} \right] : \; \operatorname{sign} \left( y - {\beta}_{i} \right) = \operatorname{sign} \left( y - {\beta}_{j} \right) = e $. This also implies that the support, $ \mathcal{S} = \left\{ i \mid { \left( \left| {y}_{i} - \beta \right| - \gamma \right) }_{+} \neq 0 \right\} $ is constant within this section (As otherwise a new break point would be created).
This means the equation becomes:
$$\begin{aligned} 0 & = \sum_{i \in \mathcal{S}} {e}_{i} \left( \left| {y}_{i} - \beta \right| - \gamma \right) - b & \text{} \\ & = \sum_{i \in \mathcal{S}} \left( {y}_{i} - \beta - \gamma {e}_{i} \right) - b & \text{ $ {e}_{i} \left| {y}_{i} - \beta \right| = \operatorname{sign} \left( {y}_{i} - \beta \right) \left| {y}_{i} - \beta \right| = {y}_{i} - \beta $ } \\ & \Rightarrow \left| \mathcal{S} \right| \beta = \sum_{i \in \mathcal{S}} {y}_{i} - \gamma \sum_{i \in \mathcal{S}} {e}_{i} - b & \text{} \\ & \Rightarrow \beta = \frac{1}{ \left| \mathcal{S} \right| } \left( \sum_{i \in \mathcal{S}} \left( {y}_{i} - \gamma {e}_{i} \right) - b \right) \end{aligned}$$
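The closed form above suggests an exact solver: evaluate $ g $ at the $ 2 n $ break points, locate the sign change, and apply the formula on that section. A sketch, assuming NumPy (names are mine):

```python
import numpy as np

def soft(t, gamma):
    """Elementwise soft threshold."""
    return np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)

def prox_l1_sum_exact(y, gamma, b):
    """Exact prox of gamma*||.||_1 restricted to {x : 1^T x = b}."""
    g = lambda beta: np.sum(soft(y - beta, gamma)) - b
    bps = np.sort(np.concatenate([y - gamma, y + gamma]))  # the 2n break points
    if g(bps[0]) < 0:                  # root left of all break points: e_i = +1
        beta = (np.sum(y - gamma) - b) / len(y)
    elif g(bps[-1]) >= 0:              # root right of all break points: e_i = -1
        beta = (np.sum(y + gamma) - b) / len(y)
    else:
        vals = np.array([g(p) for p in bps])       # g is decreasing in beta
        k = np.nonzero(vals >= 0)[0][-1]           # g(bps[k]) >= 0 > g(bps[k+1])
        mid = 0.5 * (bps[k] + bps[k + 1])
        S = np.abs(y - mid) > gamma                # support, constant on the section
        if not np.any(S):                          # g is flat here (only if b == 0)
            beta = mid
        else:
            e = np.sign(y[S] - mid)                # fixed signs on the section
            beta = (np.sum(y[S] - gamma * e) - b) / np.count_nonzero(S)
    return soft(y - beta, gamma)
```

For example, with $ y = (1, 2, 3) $, $ \gamma = 0.1 $, $ b = 1 $, the active section is $ \left[ 1.1, 1.9 \right] $ with full support, giving $ \beta = 4.9 / 3 $.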
The full MATLAB code with CVX validation is available in my StackExchange Mathematics Q2886713 GitHub Repository.
Remark
Knowing the solution to the above problem (Deriving the Prox Operator) gives us an efficient method (Proximal Gradient Descent) to solve:
$$\begin{aligned} \arg \min_{x} \quad & \frac{1}{2} {\left\| A x - b \right\|}_{2}^{2} + \gamma {\left\| x \right\|}_{1} \\ \text{subject to} \quad & \boldsymbol{1}^{T} x = b \end{aligned}$$
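A sketch of that method, assuming NumPy (names are mine). To keep the snippet self-contained the prox is recomputed here by simple bisection, and the data vector is called `c` to avoid clashing with the scalar $ b $ of the constraint:

```python
import numpy as np

def prox_l1_sum(y, gamma, b, tol=1e-10):
    """prox of gamma*||.||_1 on the hyperplane 1^T x = b (bisection on beta)."""
    soft = lambda t: np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)
    g = lambda beta: np.sum(soft(y - beta)) - b
    lo, hi = y.min() - gamma - abs(b), y.max() + gamma + abs(b)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return soft(y - 0.5 * (lo + hi))

def prox_grad(A, c, gamma, b, n_iter=500):
    """Proximal Gradient Descent for
    min 0.5*||Ax - c||^2 + gamma*||x||_1  s.t.  1^T x = b."""
    m, n = A.shape
    x = np.full(n, b / n)                      # feasible starting point
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(n_iter):
        # gradient step on the smooth term, then the constrained l1 prox
        x = prox_l1_sum(x - A.T @ (A @ x - c) / L, gamma / L, b)
    return x
```

The prox parameter is scaled by the step size ($ \gamma / L $), as usual in Proximal Gradient Descent.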
With the ADMM framework we can even tackle more general cases.
Reference
The solution is based on Efficient Solvers for Sparse Subspace Clustering.