You are right that you can think of this problem in terms of Poisson processes. The first arrival time after $t = 0$ as well as the inter-arrival times in a Poisson process of rate $1$ are independent $\text{Exp}(1)$
random variables and so $Y_1$, $Y_1 + Y_2$, $\ldots$, $Y_1 + Y_2 + \cdots + Y_{n+1}$ can be taken to be the times of the first, second, $\ldots$, $(n+1)$-th arrivals after $t = 0$ in the process. The random variables
$\frac{Y_1}{\sum_{i=1}^{n+1} Y_i}, \frac{Y_1+Y_2}{\sum_{i=1}^{n+1} Y_i}, \dots, \frac{Y_1+\dots+Y_n}{\sum_{i=1}^{n+1} Y_i}$ that you are looking at are the first $n$ arrival times "normalized" to a unit interval.
For $0 < t_1 < t_2 < \dots < t_n < 1$, the conditional probability
that there is one arrival in each interval $(t_i, t_i + \Delta t_i)$
and none in the remaining time of total length $(1 - \sum_{i=1}^n \Delta t_i)$
given that there are $n$ arrivals in $(0, 1)$ is approximately
\begin{align*}
\frac{\exp\left(-\left(1 - \sum_{i=1}^n \Delta t_i\right)\right) \prod_{i=1}^n \exp(-\Delta t_i)\,\Delta t_i/1!}{\exp(-1)\,\frac{1^n}{n!}} &= n!\,\Delta t_1\Delta t_2 \cdots \Delta t_n\\
&= f_{U_{(1)}, \dots, U_{(n)}}(t_1, t_2, \ldots, t_n)\,\Delta t_1\Delta t_2 \cdots \Delta t_n
\end{align*}
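As a numerical sanity check of this picture (not part of the argument), one can simulate the normalized partial sums and compare them with uniform order statistics. A minimal sketch, where $n$, the replication count, and the seed are arbitrary choices of mine:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 5, 100_000

# Rows of n+1 i.i.d. Exp(1) spacings; partial sums are the arrival times.
spacings = rng.exponential(1.0, size=(reps, n + 1))
arrivals = np.cumsum(spacings, axis=1)

# First n arrival times, normalized by the (n+1)-th arrival time.
normalized = arrivals[:, :n] / arrivals[:, [n]]

# Order statistics of n i.i.d. Uniform(0,1) draws, for comparison.
u_sorted = np.sort(rng.uniform(size=(reps, n)), axis=1)

# The first coordinate of each should be Beta(1, n) (the minimum of n uniforms).
print(stats.kstest(normalized[:, 0], stats.beta(1, n).cdf))
print(stats.kstest(u_sorted[:, 0], stats.beta(1, n).cdf))
```

Both Kolmogorov-Smirnov tests should report non-small $p$-values, since both samples follow the $\text{Beta}(1, n)$ law of $U_{(1)}$ exactly.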
I think an easier-to-follow (and simpler) proof uses a different change of variables.
We have the joint density of the order statistics $(U_1=X_{(1)},\ldots,U_n=X_{(n)})$ of an i.i.d. sample from the shifted exponential density $e^{-(x-\theta)}\mathbf 1_{x>\theta}$:
$$f_{\mathbf U}(u_1,\cdots,u_n)=n!\exp\left[-\sum_{i=1}^nu_i+n\theta\right]\mathbf1_{\theta<u_1<u_2<\cdots<u_n}$$
Now transform $(U_1,\ldots,U_n)\to(Y_1,\ldots,Y_n)$ with $Y_i=(n-i+1)(U_i-U_{i-1})$ for $i=1,2,\ldots,n$, taking $U_0=\theta$.
The sum telescopes to give $\sum_{i=1}^nu_i=\sum_{i=1}^ny_i+n\theta$, and the Jacobian determinant of the forward map comes out as $n!$ (spelled out below).
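For completeness, here is that determinant worked out. Since $y_i$ involves only $u_{i-1}$ and $u_i$, the matrix of partial derivatives is lower triangular:
$$\left|\frac{\partial(y_1,\ldots,y_n)}{\partial(u_1,\ldots,u_n)}\right|=\det\begin{pmatrix}n&0&\cdots&0\\-(n-1)&n-1&\cdots&0\\\vdots&&\ddots&\vdots\\0&\cdots&-1&1\end{pmatrix}=n(n-1)\cdots1=n!,$$
so the density transformation contributes $\left|\partial(u_1,\ldots,u_n)/\partial(y_1,\ldots,y_n)\right|=1/n!$, which exactly cancels the $n!$ in $f_{\mathbf U}$.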
So you get the joint density of $(Y_1,\cdots,Y_n)$
$$f_{\mathbf Y}(y_1,\cdots,y_n)=\exp\left[-\sum_{i=1}^ny_i\right]\mathbf1_{y_1,\cdots,y_n>0}$$
Not surprisingly, the spacings of successive order statistics from an exponential sample come out as independent. In fact, $Y_1,\ldots,Y_n$ are i.i.d. exponential with mean $1$.
This implies $2Y_i\stackrel{\text{i.i.d.}}{\sim}\chi^2_2$ for $i=1,2,\ldots,n$, since doubling an $\text{Exp}(1)$ variable gives an exponential with mean $2$, which is precisely the $\chi^2_2$ distribution.
So we have two independent variables, $2Y_1$ and $\sum_{i=2}^n2Y_i$. Both have chi-square distributions: the former with $2$ degrees of freedom and the latter with $2n-2$.
It is now a matter of simple algebra to see that $2Y_1=2n(X_{(1)}-\theta)$ and $2\sum_{i=2}^nY_i=2\sum_{i=2}^n(X_{(i)}-X_{(1)})$.
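If you want to sanity-check these two distributional claims numerically, here is a minimal NumPy/SciPy sketch of mine (the sample size $n=8$, location $\theta=3$, and seed are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, theta, reps = 8, 3.0, 100_000

# i.i.d. sample from the shifted exponential: X = theta + Exp(1).
x = theta + rng.exponential(1.0, size=(reps, n))
x.sort(axis=1)  # each row now holds the order statistics X_(1) <= ... <= X_(n)

t1 = 2 * n * (x[:, 0] - theta)                # claimed chi^2 with 2 df
t2 = 2 * (x[:, 1:] - x[:, [0]]).sum(axis=1)   # claimed chi^2 with 2n-2 df

# Kolmogorov-Smirnov tests against the claimed chi-square CDFs.
print(stats.kstest(t1, stats.chi2(2).cdf))
print(stats.kstest(t2, stats.chi2(2 * n - 2).cdf))

# Independence check: the sample correlation should be near 0.
print(np.corrcoef(t1, t2)[0, 1])
```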
Best Answer
Thanks to the memoryless property of the exponential distribution, the difference between $Y_{(n)}$ and $Y_{(1)}$ is independent of the actual value of $Y_{(1)}$. So to find the distribution of $R = Y_{(n)} - Y_{(1)}$ we can operate under the assumption that $Y_{(1)} = 0$.
Then $P(R < r)$ is the probability that the remaining $n-1$ sample observations all fall in the range $(0,r)$. This is because (under the assumption that the smallest observation is $0$), $R < r$ means that the largest observation must be smaller than $r$. But in order for that to happen, all $n-1$ of the remaining sample observations must be smaller than $r$. (In fact, the two statements are equivalent.)
Thus
\begin{align*}
P(R < r) &= P(n-1 \text{ independent sample observations are all smaller than } r) \\
&= \left(\int_0^r e^{-x} \, dx\right)^{n-1} \\
&= \left(1-e^{-r}\right)^{n-1}.
\end{align*}
Differentiating to obtain the pdf of $R$, we get
$$f_R(r) = (n-1)\left(1-e^{-r}\right)^{n-2} e^{-r}, \qquad r > 0.$$
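If you want to verify this numerically, here is a minimal sketch of mine (the sample size $n=6$ and seed are arbitrary) comparing the simulated range against the derived CDF:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 6, 100_000

# Range of n i.i.d. Exp(1) observations.
y = rng.exponential(1.0, size=(reps, n))
r = y.max(axis=1) - y.min(axis=1)

# KS test against the derived CDF P(R < r) = (1 - e^{-r})^(n-1).
print(stats.kstest(r, lambda t: (1.0 - np.exp(-t)) ** (n - 1)))
```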