If you want to find the proximal operator of $\|x\|_{\infty}$, you don't want to compute the subgradient directly. Instead, as the previous answer mentioned, we can use the Moreau decomposition:
$$ v = \textrm{prox}_{f}(v) + \textrm{prox}_{f^*}(v)$$
where $f^*$ is the convex conjugate, given by:
$$ f^*(x) = \underset{y}{\sup}\;(x^Ty - f(y))$$
In the case of norms, the convex conjugate is the indicator function of the dual-norm unit ball: if $f(x) = \|x\|_p$ for $p \geq 1$, then $f^*(x) = 1_{\{\|x\|_q \leq 1\}}(x)$, where $1/p + 1/q = 1$ and the indicator function is:
\begin{equation}
1_S(x)=\begin{cases}
0, & \text{if $x \in S$}.\\
\infty, & \text{if $x \notin S$}.
\end{cases}
\end{equation}
For your particular question, $f(x) = \|x\|_{\infty}$, so $f^*(x) = 1_{\{\|x\|_1\leq 1\}}(x)$.
We know
$$\textrm{prox}_{f}(x) = x - \textrm{prox}_{f^*}(x)$$
Thus we need to find
$$\textrm{prox}_{f^*}(x) = \underset{z}{\arg\min} \; \left(1_{\{\|z\|_1 \leq 1\}}(z) + \frac{1}{2}\|z - x\|_2^2 \right)$$
But this is simply the Euclidean projection onto the $L_1$ ball, so the prox of the infinity norm is given by:
$$ \textrm{prox}_{\|\cdot\|_{\infty}}(x) = x - \textrm{Proj}_{\{\|\cdot\|_1 \leq 1\}}(x)$$
The best reference for this is Neal Parikh and Stephen Boyd, Proximal Algorithms.
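For a concrete implementation sketch of this formula: the Euclidean projection onto the $L_1$ ball can be computed with the well-known sorting-based method of Duchi et al. (2008), and the Moreau decomposition above generalizes to a step size $t > 0$ as $\textrm{prox}_{t\|\cdot\|_{\infty}}(x) = x - \textrm{Proj}_{\{\|\cdot\|_1 \leq t\}}(x)$. The following NumPy code is only illustrative (the function names are made up for this example):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto {z : ||z||_1 <= radius} (sorting method of Duchi et al., 2008)."""
    u = np.abs(v)
    if u.sum() <= radius:
        return v.copy()
    s = np.sort(u)[::-1]                 # magnitudes sorted in decreasing order
    cssv = np.cumsum(s) - radius
    ind = np.arange(1, len(v) + 1)
    cond = s - cssv / ind > 0
    rho = ind[cond][-1]
    theta = cssv[cond][-1] / rho         # soft-threshold level
    return np.sign(v) * np.maximum(u - theta, 0.0)

def prox_linf(x, t=1.0):
    """prox_{t * ||.||_inf}(x) = x - Proj_{||.||_1 <= t}(x), by the Moreau decomposition."""
    x = np.asarray(x, dtype=float)
    return x - project_l1_ball(x, radius=t)

# Sanity check at a point where the answer is easy to verify by hand:
print(prox_linf(np.array([3.0, 1.0])))   # expected [2., 1.]
```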
I'll attempt to explain the intuition behind why $v \in \partial h(u)$ implies $u \in \partial h^*(v)$ for a closed convex function $h$.
There may be many affine minorants of $h$ with a given slope $y$, but we only care about the best one:
\begin{align}
&h(x) \geq \langle y , x \rangle - \alpha \quad \text{for all } x \\
\iff & \alpha \geq \langle y, x \rangle - h(x) \quad \text{for all } x \\
\iff & \alpha \geq \sup_x \, \left( \langle y, x \rangle - h(x) \right) \\
\iff & \alpha \geq h^*(y).
\end{align}
Thus, the best choice of $\alpha$ is $h^*(y)$.
(If there is no affine minorant of $h$ with slope $y$, then $h^*(y) = \infty$.)
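For a concrete one-dimensional example, take $h(x) = |x|$. An affine function $x \mapsto yx - \alpha$ lies below $|x|$ for all $x$ exactly when $|y| \leq 1$ and $\alpha \geq 0$ (for $|y| > 1$ there is no affine minorant with slope $y$ at all), so the best offset is $\alpha = 0$ and
\begin{equation}
h^*(y)=\begin{cases}
0, & \text{if $|y| \leq 1$}.\\
\infty, & \text{if $|y| > 1$}.
\end{cases}
\end{equation}
This matches the fact noted in the first answer above: the conjugate of a norm is the indicator of the dual-norm unit ball.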
Suppose that
\begin{equation}
v \in \partial h(u).
\end{equation}
This means: there exists some affine minorant of $h$ with slope $v$ which is exact at $u$.
Of all affine minorants of $h$ with slope $v$, the best one (the closest one) is $a(x) = \langle v, x \rangle - h^*(v)$.
Since $a$ is the best affine minorant of $h$ with slope $v$, and since some affine minorant with slope $v$ is exact at $u$, it follows that $a$ is exact at $u$:
\begin{equation}
h(u) = \langle v, u \rangle - h^*(v)
\end{equation}
Otherwise $a$ would not be the best.
Hence
\begin{align}
h^*(v) &= \langle u,v \rangle - h(u) \\
&= \langle u, v \rangle - h^{**}(u),
\end{align}
where the second equality uses $h(u) = h^{**}(u)$ (true for a closed convex $h$ by the Fenchel-Moreau theorem). Since $h^{**}(u) = \sup_y \left( \langle u, y \rangle - h^*(y) \right)$, the function $y \mapsto \langle u, y \rangle - h^{**}(u)$ is an affine minorant of $h^*$ with slope $u$.
Thus we have found an affine minorant of $h^*$ with slope $u$ which is exact at $v$. This means that
\begin{equation}
u \in \partial h^*(v).
\end{equation}
In summary, note the beautiful symmetry that allowed our key step:
\begin{equation}
h(u) = \langle v, u \rangle - h^*(v) \qquad \text{ " $v$ is a subgradient of $h$ at $u$ "}
\end{equation}
becomes
\begin{equation}
h^*(v) = \langle u, v \rangle - h(u) \qquad \text{ " $u$ is a subgradient of $h^*$ at $v$ "}.
\end{equation}
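To see this symmetry in the simplest nonsmooth example, take $h(u) = |u|$ on $\mathbb{R}$, so that $h^*$ is the indicator of $[-1, 1]$ (as computed above). For $u > 0$, the only subgradient of $h$ at $u$ is $v = 1$, and the two statements read
\begin{equation}
h(u) = u = \langle 1, u \rangle - h^*(1), \qquad h^*(1) = 0 = \langle u, 1 \rangle - h(u),
\end{equation}
while on the conjugate side $\partial h^*(1) = [0, \infty)$ (the normal cone of $[-1,1]$ at $1$), which indeed contains $u$.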
Best Answer
The previous answer contained a crucial mistake (thanks to the users in the comments for pointing it out) and became a mess of edits, so here's a new, correct one.

Denote $\|x\|_{2,w}^2 = \sum_{i=1}^n w_ix_i^2$. Define $$f(x) = \lambda\sqrt{ \sum_{i = 1}^{n} {w}_{i} {x}_{i}^{2} } + \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} = \lambda\|x\|_{2,w} + \frac{1}{2}\|x-y\|_2^2.$$ This is a convex function, being the sum of a scaled norm and a shifted squared $\ell_2$ norm. It is not differentiable everywhere, but it is convex and continuous, so we can essentially replace the gradient by the subgradient, which is defined as $$\partial f(x) = \{v\in\mathbb{R}^n \ | \ f(z) \geq f(x) + \langle v, z-x\rangle \text{ for all $z$}\}.$$ Then a standard fact from convex analysis tells us that $x$ minimizes $f$ if and only if $0 \in \partial f(x)$.
Now we compute the subgradient of $f$. When $x\neq0$, both summands are differentiable and we obtain $$ \partial f(x) = \left\{ \frac{\lambda}{\|x\|_{2,w}}Wx + (x-y) \right\},$$ where $W$ is the diagonal matrix with the $w_i$'s on its diagonal.
The case $x=0$ is more interesting. The subgradient of $x \mapsto \lambda\|x\|_{2,w}$ at $0$ is by definition $$\{v \ | \ \langle v, z\rangle \leq \lambda \|z\|_{2,w}\text{ for all $z$}\},$$ which exactly means $\|v\|_{2,w}^*\leq \lambda$, where $\|\cdot\|_{2,w}^*$ is the dual norm to the weighted $\ell_2$ norm; by Cauchy-Schwarz (spelled out below) it is $\|v\|_{2,w}^* = \sqrt{v^T W^{-1}v}$. So the total subgradient at zero is $$\partial f(0) = \{ v - y \ | \ \|v\|_{2,w}^*\leq \lambda\}.$$
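For completeness, here is the Cauchy-Schwarz computation behind that dual-norm formula (assuming every $w_i > 0$, so that $W$ is invertible): for any $v$ and $z$,
$$\langle v, z \rangle = \langle W^{-1/2}v,\, W^{1/2}z \rangle \leq \|W^{-1/2}v\|_2 \, \|W^{1/2}z\|_2 = \sqrt{v^T W^{-1} v}\;\|z\|_{2,w},$$
with equality when $z \propto W^{-1}v$, so indeed $\|v\|_{2,w}^* = \sqrt{v^T W^{-1} v}$.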
Now we must see when $0\in\partial f(x)$.
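In particular, since $\partial f(0) = \{ v - y \ | \ \|v\|_{2,w}^*\leq \lambda\}$, we get $0 \in \partial f(0)$ exactly when $\|y\|_{2,w}^* = \sqrt{y^T W^{-1} y} \leq \lambda$, in which case the minimizer is $x = 0$. As a quick numerical sanity check of this condition and of the stationarity equation above, here is an illustrative sketch (it assumes NumPy/SciPy, uses a generic solver rather than a closed-form expression, and the name `prox_weighted_l2` is made up for the example):

```python
import numpy as np
from scipy.optimize import minimize

def prox_weighted_l2(y, w, lam):
    """Numerically minimize f(x) = lam * sqrt(sum_i w_i x_i^2) + 0.5 * ||x - y||_2^2."""
    y, w = np.asarray(y, dtype=float), np.asarray(w, dtype=float)
    # From the subgradient at zero: x = 0 is optimal iff sqrt(y^T W^{-1} y) <= lam.
    if np.sqrt(np.sum(y**2 / w)) <= lam:
        return np.zeros_like(y)
    f = lambda x: lam * np.sqrt(np.sum(w * x**2)) + 0.5 * np.sum((x - y)**2)
    return minimize(f, x0=y).x  # f is smooth away from 0, where the minimizer lies

y, w, lam = np.array([3.0, -1.0, 2.0]), np.array([1.0, 2.0, 4.0]), 1.0
x = prox_weighted_l2(y, w, lam)
# Stationarity check: (lam / ||x||_{2,w}) * W x + (x - y) should be numerically zero.
print(lam * w * x / np.sqrt(np.sum(w * x**2)) + (x - y))
```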