Yes, $f(x\mid\theta)$ is an even function of $x$, but how can the pdf 'change' completely in your calculations? You carried out calculations for an exponential density whereas you are given a Laplace distribution.
Indeed, the sample is drawn from a Laplace distribution with scale parameter $1/\theta$ and location parameter zero. We can conclude that the joint density $f_{\theta}$ belongs to the one-parameter exponential family. Using this, we can find a complete sufficient statistic for $\theta$.
Due to independence, the joint density of $\mathbf X=(X_1,X_2,\cdots,X_n)$ is
\begin{align}
f_{\theta}(\mathbf x)&=\prod_{i=1}^n\frac{\theta}{2}e^{-\theta\,|x_i|}
\\&=\left(\frac{\theta}{2}\right)^ne^{-\theta\sum_{i=1}^n|x_i|}
\\&=\exp\left[-\theta\sum_{i=1}^n|x_i|+n\ln\left(\frac{\theta}{2}\right)\right]\quad,\,\mathbf x\in\mathbb R^n\,,\,\theta>0
\end{align}
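Spelling this out, the density is in one-parameter exponential family form $\exp\left[\eta(\theta)T(\mathbf x)-A(\theta)\right]$ with natural parameter $\eta(\theta)=-\theta$, statistic $T(\mathbf x)=\sum_{i=1}^n|x_i|$ and $A(\theta)=-n\ln(\theta/2)$; since the natural parameter space $\{-\theta:\theta>0\}$ contains an open interval, the standard completeness result for exponential families applies.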
Clearly, a complete sufficient statistic for the family of distributions $\{f_{\theta}:\theta>0\}$ is $$T(\mathbf X)=\sum_{i=1}^n|X_i|$$
By the Lehmann–Scheffé theorem, an unbiased estimator of $\theta$ based on $T$ will be the UMVUE of $\theta$.
It is a simple exercise to show that $|X_i|\sim\mathcal{Exp}(\theta)$ independently for each $i$, where $\theta$ denotes the rate of the distribution. As such, we have $T\sim\mathcal{Ga}(\theta,n)$ with density $$g(t)=\frac{\theta^n e^{-\theta t}t^{n-1}}{\Gamma(n)}\mathbf1_{t>0}$$
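To spell out that exercise: for $t>0$, $$P(|X_i|\le t)=\int_{-t}^{t}\frac{\theta}{2}e^{-\theta|x|}\,dx=\int_0^t\theta e^{-\theta x}\,dx=1-e^{-\theta t},$$ which is the $\mathcal{Exp}(\theta)$ cdf, and the sum of $n$ i.i.d. $\mathcal{Exp}(\theta)$ variables has exactly the gamma density $g$ above.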
We find that, for $n>1$,
\begin{align}
E\left(\frac{1}{T}\right)&=\int_0^{\infty}\frac{1}{t}\frac{\theta^n e^{-\theta t}t^{n-1}}{\Gamma(n)}\,dt
\\&=\frac{\theta^n}{\Gamma(n)}\frac{\Gamma(n-1)}{\theta^{n-1}}
\\&=\frac{\theta}{n-1}
\end{align}
So the UMVUE of $\theta$ is $$\frac{n-1}{T}=\frac{n-1}{\sum_{i=1}^n|X_i|}$$
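As a quick sanity check of this result, here is a small simulation sketch (my own addition, not part of the derivation above); it assumes NumPy's Laplace parameterization with scale $b=1/\theta$, so the sampled density is $\frac{\theta}{2}e^{-\theta|x|}$.

```python
# Monte Carlo check (sketch): (n-1)/sum|X_i| should be approximately
# unbiased for theta when X_1,...,X_n are i.i.d. Laplace(0, 1/theta).
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.5, 10, 200_000

x = rng.laplace(loc=0.0, scale=1.0 / theta, size=(reps, n))
estimates = (n - 1) / np.abs(x).sum(axis=1)   # the UMVUE, one per replication
print(estimates.mean())                       # should be close to theta = 2.5
```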
It turns out that both approaches (my initial attempt and another based on suggestions in the comment section) in my original post give the same answer. I will outline both methods here for a complete answer to the question.
Here, $\text{Gamma}(n,\theta)$ means the gamma density $f(y)=\frac{\theta^n}{\Gamma(n)}e^{-\theta y}y^{n-1}\mathbf1_{y>0}$ where $\theta,n>0$, and $\text{Exp}(\theta)$ denotes an exponential distribution with mean $1/\theta$, ($\theta>0$). Clearly, $\text{Exp}(\theta)\equiv\text{Gamma}(1,\theta)$.
Since $T=\sum_{i=1}^n\ln X_i$ is complete sufficient for $\theta$ and $\mathbb E(X_1)=\frac{\theta}{1+\theta}$, by the Lehmann–Scheffé theorem $\mathbb E(X_1\mid T)$ is the UMVUE of $\frac{\theta}{1+\theta}$.
So we have to find this conditional expectation.
We note that $X_i\stackrel{\text{i.i.d}}{\sim}\text{Beta}(\theta,1)\implies-\ln X_i\stackrel{\text{i.i.d}}{\sim}\text{Exp}(\theta)\implies-T\sim\text{Gamma}(n,\theta)$.
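To see the first implication explicitly, use the $\text{Beta}(\theta,1)$ cdf $P(X_i\le x)=x^{\theta}$ on $(0,1)$: for $t>0$, $$P(-\ln X_i\le t)=P(X_i\ge e^{-t})=1-e^{-\theta t},$$ which is the $\text{Exp}(\theta)$ cdf; summing $n$ independent $\text{Exp}(\theta)$ variables then gives $-T\sim\text{Gamma}(n,\theta)$.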
Method I:
Let $U=-\ln X_1$ and $V=-\sum_{i=2}^n\ln X_i$, so that $U$ and $V$ are independent. Indeed, $U\sim\text{Exp}(\theta)$ and $V\sim\text{Gamma}(n-1,\theta)$, implying $U+V\sim\text{Gamma}(n,\theta)$.
So, $\mathbb E(X_1\mid \sum_{i=1}^n\ln X_i=t)=\mathbb E(e^{-U}\mid U+V=-t)$.
Now we find the conditional distribution of $U\mid U+V$.
For $n>1$ and $\theta>0$, joint density of $(U,V)$ is
\begin{align}f_{U,V}(u,v)&=\theta e^{-\theta u}\mathbf1_{u>0}\frac{\theta^{n-1}}{\Gamma(n-1)}e^{-\theta v}v^{n-2}\mathbf1_{v>0}\\&=\frac{\theta^n}{\Gamma(n-1)}e^{-\theta(u+v)}v^{n-2}\mathbf1_{u,v>0}\end{align}
Changing variables, it is immediate that the joint density of $(U,U+V)$ is $$f_{U,U+V}(u,z)=\frac{\theta^n}{\Gamma(n-1)}e^{-\theta z}(z-u)^{n-2}\mathbf1_{0<u<z}$$
Let $f_{U+V}(\cdot)$ be the density of $U+V$. Thus conditional density of $U\mid U+V=z$ is \begin{align}f_{U\mid U+V}(u\mid z)&=\frac{f_{U,U+V}(u,z)}{f_{U+V}(z)}\\&=\frac{(n-1)(z-u)^{n-2}}{z^{n-1}}\mathbf1_{0<u<z}\end{align}
Therefore, $\displaystyle\mathbb E(e^{-U}\mid U+V=z)=\frac{n-1}{z^{n-1}}\int_0^z e^{-u}(z-u)^{n-2}\,\mathrm{d}u$.
That is, the UMVUE of $\frac{\theta}{1+\theta}$ is $\displaystyle\mathbb E(X_1\mid T)=\frac{n-1}{(-T)^{n-1}}\int_0^{-T} e^{-u}(-T-u)^{n-2}\,\mathrm{d}u\tag{1}$
Method II:
As $T$ is a complete sufficient statistic for $\theta$, any unbiased estimator of $\frac{\theta}{1+\theta}$ which is a function of $T$ will be the UMVUE of $\frac{\theta}{1+\theta}$ by the Lehmann–Scheffé theorem. So we proceed to find the moments of $-T$, whose distribution is known to us. We have,
$$\mathbb E\left[(-T)^r\right]=\int_0^\infty y^r\theta^n\frac{e^{-\theta y}y^{n-1}}{\Gamma(n)}\,\mathrm{d}y=\frac{\Gamma(n+r)}{\theta^r\,\Gamma(n)},\qquad n+r>0$$
Using this equation, we obtain unbiased estimators (and UMVUEs) of $1/\theta^r$ for every integer $r\ge1$.
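Explicitly, for each integer $r\ge1$, $$\mathbb E\left[\frac{(-T)^r}{n(n+1)\cdots(n+r-1)}\right]=\frac{1}{\theta^r},$$ and since $(-T)^r=(-1)^rT^r$, these are exactly the terms combined below.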
Now for $\theta>1$, we have $\displaystyle\frac{\theta}{1+\theta}=\left(1+\frac{1}{\theta}\right)^{-1}=1-\frac{1}{\theta}+\frac{1}{\theta^2}-\frac{1}{\theta^3}+\cdots$
Combining the unbiased estimators of $1/\theta^r$ we obtain $$\mathbb E\left(1+\frac{T}{n}+\frac{T^2}{n(n+1)}+\frac{T^3}{n(n+1)(n+2)}+\cdots\right)=1-\frac{1}{\theta}+\frac{1}{\theta^2}-\frac{1}{\theta^3}+\cdots$$
That is, $$\mathbb E\left(\sum_{r=0}^\infty \frac{T^r}{n(n+1)\cdots(n+r-1)}\right)=\frac{\theta}{1+\theta}$$
So assuming $\theta>1$, the UMVUE of $\frac{\theta}{1+\theta}$ is $g(T)=\displaystyle\sum_{r=0}^\infty \frac{T^r}{n(n+1)\cdots(n+r-1)}\tag{2}$
I am not certain about the case $0<\theta<1$ in the second method.
According to Mathematica, equation $(1)$ has a closed form in terms of the incomplete gamma function. And in equation $(2)$, we can express the product $n(n+1)(n+2)\cdots(n+r-1)$ in terms of the usual gamma function as $n(n+1)(n+2)\cdots(n+r-1)=\frac{\Gamma(n+r)}{\Gamma(n)}$. This perhaps explains the connection between $(1)$ and $(2)$.
Using Mathematica I could verify that $(1)$ and $(2)$ are indeed the same thing.
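For what it is worth, the same agreement can be checked numerically without Mathematica; the following sketch (my own addition, with the series in $(2)$ truncated at 200 terms) evaluates both expressions for a few values of $n$ and $T$.

```python
# Numerical comparison (sketch) of expressions (1) and (2):
# (1) is evaluated by numerical integration, (2) by a truncated series.
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def umvue_integral(t, n):
    """Expression (1): (n-1)/(-T)^(n-1) * integral_0^{-T} e^(-u) (-T-u)^(n-2) du."""
    z = -t  # z > 0 since T = sum(log X_i) < 0
    val, _ = quad(lambda u: np.exp(-u) * (z - u) ** (n - 2), 0, z)
    return (n - 1) / z ** (n - 1) * val

def umvue_series(t, n, terms=200):
    """Expression (2): sum over r of T^r * Gamma(n) / Gamma(n + r)."""
    r = np.arange(terms)
    return np.sum(t ** r * np.exp(gammaln(n) - gammaln(n + r)))

for n in (2, 5, 10):
    for t in (-0.5, -2.0, -7.3):
        print(n, t, umvue_integral(t, n), umvue_series(t, n))  # the two values should match
```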
Best Answer
Some more details than the other answers. The probability distribution of $Y$ will be a mixture distribution with two components, one continuous and one discrete. To find the distribution of $Y$, write $$ \DeclareMathOperator{\P}{\mathbb{P}} \P(Y\in A) = \P(Y\in A \mid Y<1)\P(Y<1)+\P(Y\in A\mid Y=1)\P(Y=1)\\ =\frac1\theta\cdot\int_{A\cap [0,1]} \; dy + \frac{\theta-1}{\theta}\cdot \mathbb{1}(1\in A) $$ leading directly to the likelihood $$ L_Y(\theta) = \left(\frac{\theta-1}{\theta}\right)^{n-r}\cdot \left( \frac1\theta\right)^r $$ where $r$ is the number of observations strictly smaller than $1$, showing that the corresponding count statistic $R$ alone is sufficient. Now the usual procedure leads to the maximum likelihood estimator $$ \hat{\theta}_{ML}= 1+ \frac{n-r}r $$ which also seems intuitively reasonable.
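A minimal simulation sketch (my own addition) that samples directly from the stated mixture, i.e. $Y\sim\text{Uniform}(0,1)$ with probability $1/\theta$ and $Y=1$ otherwise, and computes the estimator above:

```python
# Simulation check (sketch) of the MLE theta_hat = 1 + (n - r)/r = n/r.
import numpy as np

rng = np.random.default_rng(0)
theta, n = 3.0, 100_000

# Draw from the mixture: Uniform(0,1) with prob. 1/theta, the atom at 1 otherwise.
y = np.where(rng.uniform(size=n) < 1 / theta, rng.uniform(size=n), 1.0)

r = np.sum(y < 1)            # observations falling in the continuous part
theta_hat = 1 + (n - r) / r  # the maximum likelihood estimator
print(theta_hat)             # should be close to theta = 3
```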
Some more details: the likelihood defined in this situation, with a mixture distribution, might be new to many, so some details might help. But first, this is discussed onsite earlier at Maximum likelihood function for mixed type distribution and Weighted normal errors regression with censoring. We will need some concepts from measure theory ... let $\mu^*$ be the measure given by $$ \mu^*(A)= \mu(A) + \mathbb{1}\{ 1\in A\} $$ where $\mu$ is Lebesgue measure and the second term is an atom at $1$. Now, the distribution of $Y$ can be written as a density (in the sense of the Radon-Nikodym theorem) with respect to $\mu^*$. This looks like $$ \P(Y\in A) =\int_A f(y) \; \mu^*(dy) =\int_A f(y) \; \mu(dy) + \int_A f(y)\; d\delta_1(y) $$ where $\delta_1$ is the atom at $1$. The Radon-Nikodym density $f$ can be written $$ f(y)= \frac1\theta \mathbb{1}\{0\le y < 1\} + \frac{\theta-1}\theta\cdot \mathbb{1}\{y=1\} $$ Note that the first term in the density only contributes to the integral with respect to $d\mu$, and the second term only to the integral with respect to $d\delta_1$.
So, defining the likelihood using the RN-derivative $f$, we get $$ L_Y(\theta)=\prod_{i=1}^n f(y_i)=\prod_{i=1}^n \left\{ \frac1\theta \mathbb{1}\{0\le y_i < 1\} + \frac{\theta-1}\theta\cdot \mathbb{1}\{y_i=1\} \right\} $$ and simplifying gives the likelihood above.
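For completeness, here is the short maximization step (not spelled out in the original answer): with $r=\#\{i:y_i<1\}$, the log-likelihood is $$\ell(\theta)=(n-r)\ln(\theta-1)-n\ln\theta,\qquad \ell'(\theta)=\frac{n-r}{\theta-1}-\frac{n}{\theta}=0\iff\theta=\frac{n}{r},$$ which recovers the estimator $\hat{\theta}_{ML}=1+\frac{n-r}{r}$ stated above.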