It turns out that both approaches in my original post (my initial attempt and another based on suggestions in the comment section) give the same answer. I will outline both methods here for a complete answer to the question.
Here, $\text{Gamma}(n,\theta)$ denotes the gamma density $f(y)=\frac{\theta^n}{\Gamma(n)}e^{-\theta y}y^{n-1}\mathbf1_{y>0}$ with $\theta,n>0$, and $\text{Exp}(\theta)$ denotes the exponential distribution with mean $1/\theta$ ($\theta>0$). Clearly, $\text{Exp}(\theta)\equiv\text{Gamma}(1,\theta)$.
Since $T=\sum_{i=1}^n\ln X_i$ is a complete sufficient statistic for $\theta$ and $\mathbb E(X_1)=\frac{\theta}{1+\theta}$, the Lehmann-Scheffé theorem implies that $\mathbb E(X_1\mid T)$ is the UMVUE of $\frac{\theta}{1+\theta}$.
So we have to find this conditional expectation.
We note that $X_i\stackrel{\text{i.i.d}}{\sim}\text{Beta}(\theta,1)\implies-\ln X_i\stackrel{\text{i.i.d}}{\sim}\text{Exp}(\theta)\implies-T\sim\text{Gamma}(n,\theta)$.
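(As a quick numerical sanity check of this reduction, not needed for the argument, here is a small Python/NumPy sketch with arbitrary illustrative values of $\theta$ and $n$: it simulates $\text{Beta}(\theta,1)$ samples and tests $-T$ against the $\text{Gamma}(n,\theta)$ distribution.)
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 50_000            # illustrative values only

x = rng.beta(theta, 1.0, size=(reps, n))   # X_i ~ Beta(theta, 1)
neg_t = -np.log(x).sum(axis=1)             # -T = -sum(log X_i)

# Kolmogorov-Smirnov test against Gamma(shape = n, rate = theta), i.e. scale = 1/theta
print(stats.kstest(neg_t, "gamma", args=(n, 0, 1 / theta)))
```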
Method I:
Let $U=-\ln X_1$ and $V=-\sum_{i=2}^n\ln X_i$. Since the $X_i$ are independent, so are $U$ and $V$; moreover, $U\sim\text{Exp}(\theta)$ and $V\sim\text{Gamma}(n-1,\theta)$, so $U+V\sim\text{Gamma}(n,\theta)$.
So, $\mathbb E(X_1\mid \sum_{i=1}^n\ln X_i=t)=\mathbb E(e^{-U}\mid U+V=-t)$.
Now we find the conditional distribution of $U\mid U+V$.
For $n>1$ and $\theta>0$, the joint density of $(U,V)$ is
\begin{align}f_{U,V}(u,v)&=\theta e^{-\theta u}\mathbf1_{u>0}\frac{\theta^{n-1}}{\Gamma(n-1)}e^{-\theta v}v^{n-2}\mathbf1_{v>0}\\&=\frac{\theta^n}{\Gamma(n-1)}e^{-\theta(u+v)}v^{n-2}\mathbf1_{u,v>0}\end{align}
Changing variables, it is immediate that the joint density of $(U,U+V)$ is $$f_{U,U+V}(u,z)=\frac{\theta^n}{\Gamma(n-1)}e^{-\theta z}(z-u)^{n-2}\mathbf1_{0<u<z}$$
Let $f_{U+V}(\cdot)$ be the density of $U+V$. Thus the conditional density of $U$ given $U+V=z$ is \begin{align}f_{U\mid U+V}(u\mid z)&=\frac{f_{U,U+V}(u,z)}{f_{U+V}(z)}\\&=\frac{(n-1)(z-u)^{n-2}}{z^{n-1}}\mathbf1_{0<u<z}\end{align}
Therefore, $\displaystyle\mathbb E(e^{-U}\mid U+V=z)=\frac{n-1}{z^{n-1}}\int_0^z e^{-u}(z-u)^{n-2}\,\mathrm{d}u$.
That is, the UMVUE of $\frac{\theta}{1+\theta}$ is $\displaystyle\mathbb E(X_1\mid T)=\frac{n-1}{(-T)^{n-1}}\int_0^{-T} e^{-u}(-T-u)^{n-2}\,\mathrm{d}u\tag{1}$
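As a quick Monte Carlo check of $(1)$ (not part of the derivation; a Python sketch with numerical quadrature in place of Mathematica, and arbitrary illustrative values of $\theta$ and $n$), the average of the estimator over simulated samples should sit close to $\theta/(1+\theta)$:
```python
import numpy as np
from scipy import integrate

def umvue_integral(t, n):
    # Equation (1): (n-1)/(-t)^(n-1) * int_0^{-t} exp(-u) (-t-u)^(n-2) du, with t = T < 0
    s = -t
    val, _ = integrate.quad(lambda u: np.exp(-u) * (s - u) ** (n - 2), 0.0, s)
    return (n - 1) / s ** (n - 1) * val

rng = np.random.default_rng(0)
theta, n, reps = 2.5, 10, 10_000           # illustrative values only

estimates = []
for _ in range(reps):
    x = rng.beta(theta, 1.0, size=n)       # X_i ~ Beta(theta, 1)
    t = np.log(x).sum()                    # T = sum(log X_i) < 0
    estimates.append(umvue_integral(t, n))

print(np.mean(estimates), theta / (1 + theta))   # the two numbers should be close
```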
Method II:
As $T$ is a complete sufficient statistic for $\theta$, any unbiased estimator of $\frac{\theta}{1+\theta}$ that is a function of $T$ will be the UMVUE of $\frac{\theta}{1+\theta}$ by the Lehmann-Scheffé theorem. So we proceed to find the moments of $-T$, whose distribution is known to us. We have,
$$\mathbb E\left[(-T)^r\right]=\int_0^\infty y^r\,\frac{\theta^n e^{-\theta y}y^{n-1}}{\Gamma(n)}\,\mathrm{d}y=\frac{\Gamma(n+r)}{\theta^r\,\Gamma(n)},\qquad n+r>0$$
Using this equation we obtain unbiased estimators (and hence UMVUEs) of $1/\theta^r$ for every integer $r\ge1$: namely, $\frac{\Gamma(n)}{\Gamma(n+r)}(-T)^r$ is unbiased for $\frac{1}{\theta^r}$.
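(A short simulation, with arbitrary illustrative values of $\theta$, $n$ and $r$, agrees that $\frac{\Gamma(n)}{\Gamma(n+r)}(-T)^r$ has expectation $1/\theta^r$:)
```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)
theta, n, r, reps = 2.0, 7, 3, 100_000     # illustrative values only

x = rng.beta(theta, 1.0, size=(reps, n))
neg_t = -np.log(x).sum(axis=1)             # -T ~ Gamma(n, theta)

# Gamma(n)/Gamma(n+r) * (-T)^r, with the gamma ratio computed on the log scale
est = neg_t ** r * np.exp(gammaln(n) - gammaln(n + r))
print(est.mean(), 1 / theta ** r)          # should be close
```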
Now for $\theta>1$, we have $\displaystyle\frac{\theta}{1+\theta}=\left(1+\frac{1}{\theta}\right)^{-1}=1-\frac{1}{\theta}+\frac{1}{\theta^2}-\frac{1}{\theta^3}+\cdots$
Combining the unbiased estimators of $1/\theta^r$, and interchanging expectation and summation (justified for $\theta>1$ since $\sum_{r\ge0}\mathbb E\left[\frac{|T|^r}{n(n+1)\cdots(n+r-1)}\right]=\sum_{r\ge0}\theta^{-r}<\infty$), we obtain $$\mathbb E\left(1+\frac{T}{n}+\frac{T^2}{n(n+1)}+\frac{T^3}{n(n+1)(n+2)}+\cdots\right)=1-\frac{1}{\theta}+\frac{1}{\theta^2}-\frac{1}{\theta^3}+\cdots$$
That is, $$\mathbb E\left(\sum_{r=0}^\infty \frac{T^r}{n(n+1)\cdots(n+r-1)}\right)=\frac{\theta}{1+\theta}$$
So, assuming $\theta>1$, the UMVUE of $\frac{\theta}{1+\theta}$ is $g(T)=\displaystyle\sum_{r=0}^\infty \frac{T^r}{n(n+1)\cdots(n+r-1)}\tag{2}$ where the $r=0$ term (an empty product in the denominator) equals $1$.
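(A small Python sketch of $(2)$, truncating the series at a fixed number of terms; the values of $\theta>1$ and $n$ are arbitrary and purely illustrative. The simulated mean of $g(T)$ should again sit close to $\theta/(1+\theta)$.)
```python
import numpy as np

def umvue_series(t, n, terms=200):
    # Truncation of equation (2); the r = 0 term (empty product) equals 1.
    total, term = 1.0, 1.0
    for r in range(1, terms):
        term *= t / (n + r - 1)            # turns T^(r-1)/(n)_(r-1) into T^r/(n)_r
        total += term
    return total

rng = np.random.default_rng(2)
theta, n, reps = 2.5, 8, 20_000            # illustrative values with theta > 1

x = rng.beta(theta, 1.0, size=(reps, n))
t = np.log(x).sum(axis=1)                  # T < 0
print(np.mean([umvue_series(ti, n) for ti in t]), theta / (1 + theta))
```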
I am not certain about the case $0<\theta<1$ in the second method.
According to Mathematica, equation $(1)$ has a closed form in terms of the incomplete gamma function. In equation $(2)$, the product $n(n+1)(n+2)\cdots(n+r-1)$ can be expressed in terms of the usual gamma function as $n(n+1)(n+2)\cdots(n+r-1)=\frac{\Gamma(n+r)}{\Gamma(n)}$. This perhaps explains the apparent connection between $(1)$ and $(2)$.
Using Mathematica I could verify that $(1)$ and $(2)$ are indeed the same thing.
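One can also compare $(1)$ and $(2)$ numerically without Mathematica: the series in $(2)$ is the Kummer series $\sum_{r\ge0}T^r/\{n(n+1)\cdots(n+r-1)\}={}_1F_1(1;n;T)$, since $(1)_r/r!=1$. A small Python sketch (quadrature for $(1)$, `scipy.special.hyp1f1` for $(2)$; the test points and the value of $n$ are arbitrary) shows the two expressions agreeing:
```python
import numpy as np
from scipy import integrate, special

def eq1(t, n):
    # Equation (1), evaluated by numerical quadrature; t = T < 0
    s = -t
    val, _ = integrate.quad(lambda u: np.exp(-u) * (s - u) ** (n - 2), 0.0, s)
    return (n - 1) / s ** (n - 1) * val

def eq2(t, n):
    # Equation (2), summed in closed form as Kummer's 1F1(1; n; T)
    return special.hyp1f1(1.0, n, t)

for t in (-0.5, -3.0, -12.0):              # arbitrary test points (T is always negative)
    print(eq1(t, 8), eq2(t, 8))            # the two columns agree
```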
The second method is not only strange but wrong. The first method is correct. The definition of a mixture distribution says that the $\lambda$ and $1-\lambda$ weights are applied to the distribution functions, not to the random variables*. To convince yourself, try both methods on a mixture of Gaussians, as in the example below: with the first (correct) approach the histogram of the samples matches the bimodal target density, while with the second the histogram is unimodal, so it is clearly wrong. In fact, we know that a sum of Gaussians is Gaussian, and that is exactly what the second approach produces.
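Here is a minimal Python/NumPy sketch of that comparison (the component means, standard deviations, and mixing weight $\lambda$ are arbitrary illustrative choices):
```python
import numpy as np

rng = np.random.default_rng(0)
N, lam = 100_000, 0.3                      # illustrative mixing weight
mu1, sd1, mu2, sd2 = -3.0, 1.0, 4.0, 1.5   # illustrative component parameters

# Correct: pick a component for each draw, then sample from that component.
pick_first = rng.random(N) < lam
correct = np.where(pick_first,
                   rng.normal(mu1, sd1, N),
                   rng.normal(mu2, sd2, N))

# Wrong: apply the weights to the random variables themselves.
wrong = lam * rng.normal(mu1, sd1, N) + (1 - lam) * rng.normal(mu2, sd2, N)

print(correct.mean(), wrong.mean())   # the means agree...
print(correct.std(), wrong.std())     # ...but the spreads (and shapes) do not
```
Plotting histograms of `correct` and `wrong` makes the difference obvious: the former is bimodal, the latter is a single Gaussian bump.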
If you want to simplify the sampling, notice that the number of samples $n_1$ coming from $Z_1$ follows a binomial distribution with success probability $\lambda$ and sample size $N$. So you can draw $n_1$ from that binomial distribution, then take $n_1$ samples from $Z_1$ and $N-n_1$ from $Z_2$; together they form a sample of size $N$ from the mixture.
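In code, the shortcut looks roughly like this (a sketch reusing the same illustrative parameters as above so that it stands on its own):
```python
import numpy as np

rng = np.random.default_rng(1)
N, lam = 100_000, 0.3                      # illustrative values
mu1, sd1, mu2, sd2 = -3.0, 1.0, 4.0, 1.5

n1 = rng.binomial(N, lam)                  # how many draws come from Z1
sample = np.concatenate([rng.normal(mu1, sd1, n1),        # n1 draws from Z1
                         rng.normal(mu2, sd2, N - n1)])   # N - n1 draws from Z2
rng.shuffle(sample)                        # optional: remove the ordering by component
```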
* - Think of a trivial example where we multiply a Bernoulli distributed random variable by $2$. If you multiply the values, you get a random variable with two possible values, $0$ and $2$, occurring with probabilities $p$ and $1-p$. If you instead multiplied the probability mass function by $2$, you would get the two values $0$ and $1$ with probabilities $2p$ and $2(1-p)$, which can exceed $1$ and are therefore invalid. Multiplying the distribution and multiplying the random variable are different things.
Best Answer
This derivation is directly feasible by considering the latent variable representation of a mixture random variable. Each of the $Y_i$ is associated with a latent Bernoulli variable $\xi_i\sim\mathfrak B(\pi_2)$ in the sense that $$Y_i|\xi_i=k\sim\mathfrak Exp(\lambda_{k+1})\qquad k=0,1$$ Therefore, $$Z=\sum_{i=1}^n Y_i$$ can be conditioned upon the vector of latent variables $$\boldsymbol\xi=(\xi_1,\ldots,\xi_n)$$ and written as $$ Z|\boldsymbol\xi \sim\sum_{i=1}^n Y_i\big|\boldsymbol\xi \sim\Big(\underbrace{\sum_{i;\,\xi_i=0} Y_i}_{\substack{\text{sum of iid}\\ \mathfrak Exp(\lambda_1)}}+\underbrace{\sum_{i;\,\xi_i=1} Y_i}_{\substack{\text{sum of iid}\\ \mathfrak Exp(\lambda_2)}}\Big)\Big|\boldsymbol\xi $$
This means that, conditional on $$\zeta=\sum_{i=1}^n\xi_i\sim\mathfrak B(n,\pi_2),$$ $Z$ is distributed as the sum of a Gamma $\mathfrak G(n-\zeta,\lambda_1)$ and of a Gamma $\mathfrak G(\zeta,\lambda_2)$, i.e., $$ Z|\zeta\sim Z_1+Z_2\qquad Z_1\sim \mathfrak G(n-\zeta,\lambda_1),\ \ Z_2\sim\mathfrak G(\zeta,\lambda_2)\tag{3} $$
The distribution of this sum $(3)$ is itself a signed mixture of Gamma distributions with at most $n$ terms and rates either $\lambda_1$ or $\lambda_2$, as shown in the earlier X validated post of @whuber.¹ Integrating out $\zeta$ (or marginalising in $Z$) leads to a mixture of $n+1$ terms, where the weight of the $k$-th term is the binomial probability $${n\choose k}\pi_2^k\pi_1^{n-k}$$ In conclusion, the distribution of $Z$ can be represented as a signed mixture of Gamma distributions with $O(n^2)$ terms.
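(A quick simulation check of this latent-variable representation, with arbitrary illustrative values of $n$, $\pi_2$, $\lambda_1$, $\lambda_2$: drawing $Z$ directly as a sum of mixture exponentials and drawing it through $\zeta$ and the two Gamma components should give the same distribution, which the matching moments below suggest.)
```python
import numpy as np

rng = np.random.default_rng(3)
n, pi2, lam1, lam2, reps = 6, 0.35, 1.0, 3.0, 200_000   # illustrative values

# Direct simulation: Z = sum of n draws from the two-component exponential mixture.
xi = rng.random((reps, n)) < pi2                  # latent Bernoulli(pi2) indicators
rates = np.where(xi, lam2, lam1)
z_direct = rng.exponential(1.0 / rates).sum(axis=1)

# Latent representation: zeta ~ Binomial(n, pi2),
# then Z | zeta = Gamma(n - zeta, lam1) + Gamma(zeta, lam2).
zeta = rng.binomial(n, pi2, reps)
g1 = rng.gamma(np.maximum(n - zeta, 1), 1.0 / lam1)
g2 = rng.gamma(np.maximum(zeta, 1), 1.0 / lam2)
# guard the degenerate shape-0 cases explicitly (empty sums contribute 0)
z_latent = np.where(n - zeta > 0, g1, 0.0) + np.where(zeta > 0, g2, 0.0)

print(z_direct.mean(), z_latent.mean())           # first two moments should match
print(z_direct.var(), z_latent.var())
```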
A more direct approach is to consider the $n$-fold convolution representation of the density of $Z$:
$$f_Z(z) = \int_{\mathbb R^{n-1}} \prod_{i=1}^{n-1} f_Y(y_i) f_Y(z-y_1-\cdots-y_{n-1})\,\text dy_1\cdots\,\text dy_{n-1}$$
and to expand the product of the $n$ sums $f_Y(y_i)=\pi_1 \mathfrak e(y_i|\lambda_1)+\pi_2 \mathfrak e(y_i|\lambda_2)$ into $2^n$ terms, which, after regrouping identical convolution integrals, again results in a sum of $n+1$ terms,
$$f_Z(z) =\sum_{k=0}^n {n\choose k}\pi_1^k\pi_2^{n-k}\int_{\mathbb R^{n-1}} \underbrace{\prod_{i=1}^k \mathfrak e(y_i|\lambda_1)}_{\substack{\text{leading to}\\ \mathfrak G(k,\lambda_1)}}\,\underbrace{\prod_{i=k+1}^n \mathfrak e(y_i|\lambda_2)}_{\substack{\text{leading to}\\ \mathfrak G(n-k,\lambda_2)}}\,\text dy_1\cdots\,\text dy_{n-1}$$
where $y_n=z-y_1-\cdots-y_{n-1}$.
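This $(n+1)$-term representation is easy to check numerically. Below is a rough Python sketch (all parameter values are arbitrary illustrative choices): each term's convolution of a $\mathfrak G(k,\lambda_1)$ and a $\mathfrak G(n-k,\lambda_2)$ density is computed by one-dimensional quadrature, and the resulting density is compared with a histogram of direct draws from the mixture sum.
```python
import numpy as np
from scipy import integrate, stats
from scipy.special import comb

rng = np.random.default_rng(4)
n, pi2, lam1, lam2 = 4, 0.35, 1.0, 3.0     # illustrative values
pi1 = 1.0 - pi2

def f_Z(z):
    # (n+1)-term representation; k = number of Exp(lam1) components
    total = 0.0
    for k in range(n + 1):
        w = comb(n, k) * pi1 ** k * pi2 ** (n - k)
        if k == 0:
            dens = stats.gamma.pdf(z, a=n, scale=1 / lam2)
        elif k == n:
            dens = stats.gamma.pdf(z, a=n, scale=1 / lam1)
        else:  # convolution of Gamma(k, lam1) and Gamma(n - k, lam2) densities
            dens, _ = integrate.quad(
                lambda y: stats.gamma.pdf(y, a=k, scale=1 / lam1)
                * stats.gamma.pdf(z - y, a=n - k, scale=1 / lam2),
                0.0, z)
        total += w * dens
    return total

# Direct draws from Z = sum of n mixture exponentials, for comparison.
rates = np.where(rng.random((200_000, n)) < pi2, lam2, lam1)
z = rng.exponential(1.0 / rates).sum(axis=1)
hist, edges = np.histogram(z, bins=40, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(max(abs(hist[i] - f_Z(m)) for i, m in enumerate(mids)))   # should be small
```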
¹Or equivalently a distribution with a more complex density involving a confluent hypergeometric function $_1F_1$ as shown in the earlier CV post of @Carl.