You have a likelihood function
$$
L(\lambda) = (\lambda e^{-\lambda x_1})\cdots(\lambda e^{-\lambda x_n}) = \lambda^n e^{-\lambda(x_1+\cdots+x_n)}, \tag 1
$$
and
$$
\ell(\lambda)=\log L(\lambda) = n\log\lambda -\lambda(x_1+\cdots+x_n) \tag 2
$$
The density of the uniform distribution on the interval $\lambda\in[u_0,u_1]$ does not depend on $\lambda$. Therefore the posterior density is proportional to $(1)$ on the interval $[u_0,u_1]$. The maximum value of the posterior density therefore occurs at the point in the interval $[u_0,u_1]$ where $(1)$ attains its maximum value. Since $(2)$ is an increasing function of $(1)$, that is the same as the point in the interval $[u_0,u_1]$ where $(2)$ attains its maximum. We have
$$
\ell'(\lambda) = \frac n \lambda - (x_1+\cdots+x_n)\quad\begin{cases} >0 & \text{if }0<\lambda < 1/\bar x, \text{ where } \bar x = (x_1+\cdots+x_n)/n, \\[6pt]
=0 & \text{if } \lambda=1/\bar x, \\[6pt]
<0 & \text{if } \lambda>1/\bar x. \end{cases}
$$
Therefore the maximum value is attained at
$$
\lambda = \begin{cases} u_0 & \text{if } u_0 > 1/\bar x, \\[6pt]
1/\bar x & \text{if } u_0\le 1/\bar x\le u_1, \\[6pt]
u_1 & \text{if }u_1<1/\bar x. \end{cases}
$$
If you like, you can write this as
$$
\lambda = (u_0 \vee 1/\bar x) \wedge u_1 = u_0 \vee \left( (1/\bar x) \wedge u_1 \right)
$$
where $a\vee b$ is whichever of $a,b$ is bigger and $a\wedge b$ is whichever is smaller.
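As a sanity check, here is a minimal Python sketch of this estimator (the function name and sample data are mine, not part of the answer): the MAP estimate is simply the unconstrained maximizer $1/\bar x$ clipped to $[u_0, u_1]$.

```python
import numpy as np

# Hypothetical helper: MAP estimate of the exponential rate under a
# Uniform[u0, u1] prior, i.e. the maximizer 1 / x-bar clipped to [u0, u1].
def map_rate(x, u0, u1):
    mle = len(x) / np.sum(x)          # unconstrained maximizer n / (x_1 + ... + x_n)
    return min(max(u0, mle), u1)      # (u0 v mle) ^ u1

# Example: x-bar = 2, so the unconstrained maximizer is 0.5, which lies
# above u1 = 0.4 and is therefore clipped down to u1.
x = np.array([1.0, 2.0, 3.0])
print(map_rate(x, u0=0.1, u1=0.4))    # prints 0.4
```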
The problem is given by:
$$\begin{aligned}
\arg \min_{x} \quad & \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \gamma {\left\| x \right\|}_{1} \\
\text{subject to} \quad & \boldsymbol{1}^{T} x = b
\end{aligned}$$
The Lagrangian is given by:
$$ L \left( x, \beta \right) = \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \gamma {\left\| x \right\|}_{1} + \beta \left( \boldsymbol{1}^{T} x - b \right) $$
The solution must satisfy the KKT conditions (the problem is convex and Slater's condition holds), which are:
$$\begin{align*}
0 & \in \partial_{x} L \left( x, \beta \right) = x - y + \gamma \partial {\left\| x \right\|}_{1} + \beta \boldsymbol{1} & \text{(1)} \\
\boldsymbol{1}^{T} x & = b & \text{(2)} \\
\end{align*}$$
From (1) it is clear that $ x = \operatorname{Prox}_{ \gamma {\left\| \cdot \right\|}_{1} } \left( y - \beta \boldsymbol{1} \right) $ (the $ {L}_{1} $ prox applied to the shifted point $ y - \beta \boldsymbol{1} $). Using (2) one gets:
$$ \boldsymbol{1}^{T} \operatorname{Prox}_{ \gamma {\left\| \cdot \right\|}_{1} } \left( y - \beta \boldsymbol{1} \right) = b \Rightarrow b - \boldsymbol{1}^{T} \operatorname{Prox}_{ \gamma {\left\| \cdot \right\|}_{1} } \left( y - \beta \boldsymbol{1} \right) = 0 $$
Writing $ \operatorname{Prox}_{ \gamma {\left\| \cdot \right\|}_{1} } \left( \cdot \right) $ explicitly as the soft-threshold operator yields:
$$ g \left( \beta \right) = \sum_{i = 1}^{n} \operatorname{sign} \left( {y}_{i} - \beta \right) { \left( \left| {y}_{i} - \beta \right| - \gamma \right) }_{+} - b $$
Now the problem reduces to finding the root of $ g \left( \beta \right) $ and plugging that $ \beta $ back into the prox.
The function $ g \left( \beta \right) $ is monotonically nonincreasing in $ \beta $, and we are looking for its zero (which solves the problem).
One could find this zero with any 1D solver on the interval $ \left[ \min(y) - \gamma - \left| b \right|, \; \max(y) + \gamma + \left| b \right| \right] $, which must bracket it.
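For instance, a minimal sketch with SciPy's `brentq` as the generic 1D solver (function names are mine; the bracket is widened by 1 on each side so the endpoint values are strictly nonzero):

```python
import numpy as np
from scipy.optimize import brentq

def g(beta, y, gamma, b):
    """1^T Prox_{gamma ||.||_1}(y - beta 1) - b; nonincreasing in beta."""
    z = y - beta
    return np.sum(np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)) - b

def prox_l1_sum_constrained(y, gamma, b):
    """Prox of gamma ||.||_1 restricted to the hyperplane 1^T x = b."""
    lo = np.min(y) - gamma - abs(b) - 1.0   # g(lo) > 0
    hi = np.max(y) + gamma + abs(b) + 1.0   # g(hi) < 0
    beta = brentq(g, lo, hi, args=(y, gamma, b))
    z = y - beta
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

y = np.array([3.0, -1.0, 2.0, 0.5])
x = prox_l1_sum_constrained(y, gamma=0.5, b=1.0)
print(x, x.sum())                           # x.sum() ~ 1.0
```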
A more efficient approach exploits the fact that $ g $ is a piecewise-linear function of $ \beta $, with breakpoints where $ \left| {y}_{i} - \beta \right| = \gamma $, i.e., at $ \beta = {y}_{i} \pm \gamma $ (hence there are $ 2 n $ of them).
Hence, by bisecting over the sorted $ 2 n $ breakpoints, one can quickly locate the segment on which $ g $ has constant slope and contains its zero.
Denote this segment by $ \left[ {\beta}_{min}, {\beta}_{max} \right] $; then $ \forall {\beta}_{i}, {\beta}_{j} \in \left[ {\beta}_{min}, {\beta}_{max} \right] : \; \operatorname{sign} \left( y - {\beta}_{i} \right) = \operatorname{sign} \left( y - {\beta}_{j} \right) = e $ (elementwise). This also implies that the support $ \mathcal{S} = \left\{ i \mid { \left( \left| {y}_{i} - \beta \right| - \gamma \right) }_{+} \neq 0 \right\} $ is constant on this segment (otherwise there would be a breakpoint inside it).
This means the equation becomes:
$$\begin{aligned}
0 & = \sum_{i \in \mathcal{S}} {e}_{i} \left( \left| {y}_{i} - \beta \right| - \gamma \right) - b & \text{} \\
& = \sum_{i \in \mathcal{S}} \left( {y}_{i} - \beta - \gamma {e}_{i} \right) - b & \text{since $ {e}_{i} \left| {y}_{i} - \beta \right| = \operatorname{sign} \left( {y}_{i} - \beta \right) \left| {y}_{i} - \beta \right| = {y}_{i} - \beta $} \\
& \Rightarrow \left| \mathcal{S} \right| \beta = \sum_{i \in \mathcal{S}} {y}_{i} - \gamma \sum_{i \in \mathcal{S}} {e}_{i} - b & \text{} \\
& \Rightarrow \beta = \frac{1}{ \left| \mathcal{S} \right| } \left( \sum_{i \in \mathcal{S}} \left( {y}_{i} - \gamma {e}_{i} \right) - b \right)
\end{aligned}$$
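A sketch of the breakpoint approach (my own naming; for clarity it evaluates $ g $ at all $ 2 n $ sorted breakpoints and scans for the sign change, whereas a true bisection over the same sorted array would need only $ O(\log n) $ evaluations):

```python
import numpy as np

def g(beta, y, gamma, b):
    z = y - beta
    return np.sum(np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)) - b

def solve_beta_breakpoints(y, gamma, b):
    bp = np.sort(np.concatenate([y - gamma, y + gamma]))   # the 2n breakpoints
    vals = np.array([g(t, y, gamma, b) for t in bp])       # nonincreasing sequence
    if vals[0] <= 0:          # zero lies at or left of the first breakpoint
        t = bp[0] - 1.0       # on this ray all signs are +1
    elif vals[-1] >= 0:       # zero lies at or right of the last breakpoint
        t = bp[-1] + 1.0      # on this ray all signs are -1
    else:                     # locate the sign change in the sorted values
        k = np.searchsorted(-vals, 0.0, side='right') - 1  # last index with vals[k] >= 0
        t = 0.5 * (bp[k] + bp[k + 1])                      # interior of that segment
    e = np.sign(y - t)                    # constant sign pattern on the segment
    S = np.abs(y - t) > gamma             # constant support on the segment
    if not np.any(S):                     # g is flat here; any t is a zero (requires b = 0)
        return t
    return (np.sum(y[S] - gamma * e[S]) - b) / np.count_nonzero(S)
```

For example, `solve_beta_breakpoints(y, 0.5, 1.0)` returns the same root as the bracketed 1D solver above, without iterative refinement inside the segment.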
The full MATLAB code with CVX validation is available in my StackExchange Mathematics Q2886713 GitHub Repository.
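The linked repository is MATLAB/CVX; an analogous check in Python with cvxpy (my own sketch, reusing `prox_l1_sum_constrained` from above) would be:

```python
import numpy as np
import cvxpy as cp

# Validate the root-finding prox against a generic convex solver.
np.random.seed(0)
y, gamma, b = np.random.randn(5), 0.3, 1.0

x = cp.Variable(5)
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(x - y) + gamma * cp.norm1(x)),
           [cp.sum(x) == b]).solve()

print(x.value)                               # generic solver
print(prox_l1_sum_constrained(y, gamma, b))  # root-finding prox; should match
```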
Remark
Knowing the solution to the above problem (i.e., the derived prox operator) gives us an efficient method (proximal gradient descent) to solve the following, where the constraint level is written as $ c $ to avoid a clash with the data vector $ b $:
$$\begin{aligned}
\arg \min_{x} \quad & \frac{1}{2} {\left\| A x - b \right\|}_{2}^{2} + \gamma {\left\| x \right\|}_{1} \\
\text{subject to} \quad & \boldsymbol{1}^{T} x = c
\end{aligned}$$
With the ADMM framework we can even tackle more general cases.
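A minimal proximal-gradient (ISTA) sketch for this problem, reusing `prox_l1_sum_constrained` from above (`A`, `b_data`, `c`, the step size, and the iteration count are illustrative assumptions, not from the answer):

```python
import numpy as np

def prox_grad(A, b_data, c, gamma, n_iter=500):
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b_data)     # gradient of the smooth term
        # prox of t * (gamma ||.||_1 + indicator(1^T x = c)):
        # same constrained prox, with the scaled parameter t * gamma
        x = prox_l1_sum_constrained(x - t * grad, t * gamma, c)
    return x
```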
Reference
The solution is based on Efficient Solvers for Sparse Subspace Clustering.
Best Answer
The problem is given by (adding a factor of $ \frac{1}{2} $ to the penalty for simplicity):
$$\begin{aligned} \arg \min_{y} \frac{1}{2} {\left( x - y \right)}^{2} + \frac{\lambda}{2} \phi \left( y \right) = \arg \min_{y} \frac{1}{2} {\left( x - y \right)}^{2} - \frac{\lambda}{2} \log \left( \frac{\gamma}{ \pi \left( {y}^{2} + {\gamma}^{2} \right) } \right) \\ \end{aligned}$$
Setting the derivative to zero (stationary point):
$$\begin{aligned}
0 & = \frac{d}{d y} \left( \frac{1}{2} {\left( x - y \right)}^{2} - \frac{\lambda}{2} \log \left( \frac{\gamma}{ \pi \left( {y}^{2} + {\gamma}^{2} \right) } \right) \right) && \text{definition of a stationary point} \\
& = y - x - \frac{\lambda}{2} \frac{d}{d y} \left( \log \left( \frac{\gamma}{ \pi \left( {y}^{2} + {\gamma}^{2} \right) } \right) \right) && \text{} \\
& = y - x + \frac{\lambda}{2} \frac{d}{d y} \log \left( {y}^{2} + {\gamma}^{2} \right) && \text{$ \log \frac{\gamma}{\pi} $ is constant} \\
& = y - x + \frac{\lambda}{2} \frac{1}{{y}^{2} + {\gamma}^{2}} \frac{d}{d y} \left( {y}^{2} + {\gamma}^{2} \right) && \text{} \\
& = y - x + \lambda \frac{y}{{y}^{2} + {\gamma}^{2}} && \text{} \\
& \Rightarrow 0 = {y}^{3} - x {y}^{2} + \left( \lambda + {\gamma}^{2} \right) y - x {\gamma}^{2} && \text{multiply by $ {y}^{2} + {\gamma}^{2} $}
\end{aligned}$$
The above is a cubic polynomial in $ y $, which has a closed-form solution (e.g., Cardano's formula); since the objective is coercive and differentiable, its minimizer is the real root with the smallest objective value.
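Numerically it is often simpler to get the roots from a polynomial solver and keep the best real one; a sketch (function name and root selection are mine, the answer only derives the cubic):

```python
import numpy as np

def prox_cauchy(x, lam, gamma):
    """Prox of (lam/2) * phi at x: pick the real cubic root minimizing the objective."""
    def obj(y):
        return 0.5 * (x - y) ** 2 - 0.5 * lam * np.log(
            gamma / (np.pi * (y ** 2 + gamma ** 2)))
    # Coefficients of y^3 - x y^2 + (lam + gamma^2) y - x gamma^2
    roots = np.roots([1.0, -x, lam + gamma ** 2, -x * gamma ** 2])
    real = roots[np.abs(roots.imag) < 1e-9].real    # a cubic has >= 1 real root
    return real[np.argmin([obj(y) for y in real])]

print(prox_cauchy(2.0, lam=1.0, gamma=0.5))
```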