I am afraid the conclusion is wrong in both cases...
(Asymmetric definition)
What is the distribution of $(Y_1,Y_2-Y_1,\ldots,Y_{n-1}-Y_{n-2},1-Y_{n-1})$?
Let $Z^{(n)}=(Y_1,Y_2-Y_1,\ldots,Y_{n-1}-Y_{n-2},1-Y_{n-1})$, then $Z^{(n)}$ is almost surely in $D_n=H_n\cap(\mathbb R_+)^n$ where $H_n$ is the affine hyperplane of $\mathbb R^n$ of equation $z_1+\cdots+z_n=1$, and uniformly distributed on $D_n$. In particular, $E(Z^{(n)}_k)=1/n$ for every $1\leqslant k\leqslant n$, hence $E(Y_k)=k/n$ for every $1\leqslant k\leqslant n-1$.
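A quick Monte Carlo check of these formulas, in case it helps (a Python/numpy sketch; $n$, the replicate count, and the seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 200_000  # n as above; m Monte Carlo replicates

# Y_1 < ... < Y_{n-1}: order statistics of n-1 i.i.d. Uniform(0,1) draws
Y = np.sort(rng.uniform(size=(m, n - 1)), axis=1)

# Z^(n) = (Y_1, Y_2 - Y_1, ..., 1 - Y_{n-1}): the n spacings, summing to 1
Z = np.diff(Y, axis=1, prepend=0.0, append=1.0)

print(Z.mean(axis=0))  # each component close to 1/n = 0.2
print(Y.mean(axis=0))  # E(Y_k) = k/n: close to 0.2, 0.4, 0.6, 0.8
```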
(Symmetric definition)
What is the distribution of $(1+Y_1-Y_n,Y_2-Y_1,\ldots,Y_{n-1}-Y_{n-2},Y_n-Y_{n-1})$?
Let $T^{(n)}=(1+Y_1-Y_n,Y_2-Y_1,\ldots,Y_{n-1}-Y_{n-2},Y_n-Y_{n-1})$, then $T^{(n)}=G(Z^{(n+1)})$ where $G:\mathbb R^{n+1}\to\mathbb R^n$ is the affine function defined by $G(z_1,\ldots,z_{n+1})=(z_1+z_{n+1},z_2,\ldots,z_n)$. This yields the distribution of $T^{(n)}$, whose support is $D_n$, and whose density is proportional to $(t_k)_{1\leqslant k\leqslant n}\mapsto t_1$ on $D_n$. In particular $E(T^{(n)}_1)=2/(n+1)$ and $E(T^{(n)}_k)=1/(n+1)$ for every $2\leqslant k\leqslant n$.
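The same kind of simulation confirms these expectations (again only a sketch; the constants are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 200_000  # illustrative values

Y = np.sort(rng.uniform(size=(m, n)), axis=1)  # Y_1 < ... < Y_n

# T^(n) = (1 + Y_1 - Y_n, Y_2 - Y_1, ..., Y_n - Y_{n-1})
T = np.column_stack([1 + Y[:, 0] - Y[:, -1], np.diff(Y, axis=1)])

# first entry close to 2/(n+1) = 1/3, the rest close to 1/(n+1) = 1/6
print(T.mean(axis=0))
```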
I think an easier-to-follow (and simpler) proof uses a different change of variables.
We have the joint density of the order statistics $(U_1=X_{(1)},\cdots,U_n=X_{(n)})$
$$f_{\mathbf U}(u_1,\cdots,u_n)=n!\exp\left[-\sum_{i=1}^nu_i+n\theta\right]\mathbf1_{\theta<u_1<u_2<\cdots<u_n}$$
Now transform $(U_1,\cdots,U_n)\to(Y_1,\cdots,Y_n)$ with $Y_i=(n-i+1)(U_i-U_{i-1})$ for all $i=1,2,\cdots,n$, taking $U_0=\theta$.
It follows that $\sum_{i=1}^nu_i=\sum_{i=1}^ny_i+n\theta$. The Jacobian determinant comes out as $n!$, which cancels the $n!$ in the density above.
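If you want to double-check the Jacobian claim, here is a symbolic computation for one fixed $n$ (a sketch assuming sympy is available; $n=4$ is an arbitrary choice of mine):

```python
import sympy as sp

n = 4  # any fixed n shows the same pattern
theta = sp.symbols('theta')
u = sp.symbols(f'u1:{n + 1}')   # u1, ..., un
u_prev = (theta,) + u[:-1]      # U_0 = theta

# Y_i = (n - i + 1)(U_i - U_{i-1}); with 0-based loop index i,
# the coefficient n - i equals n - (i + 1) + 1
y = [(n - i) * (u[i] - u_prev[i]) for i in range(n)]

J = sp.Matrix([[sp.diff(yi, uj) for uj in u] for yi in y])
print(J.det(), sp.factorial(n))  # both equal 24 = 4!
```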
So you get the joint density of $(Y_1,\cdots,Y_n)$
$$f_{\mathbf Y}(y_1,\cdots,y_n)=\exp\left[-\sum_{i=1}^ny_i\right]\mathbf1_{y_1,\cdots,y_n>0}$$
Not surprisingly, the spacings of successive order statistics from an exponential sample come out independent. In fact, the $Y_i$'s are i.i.d. exponential with mean $1$ for all $i=1,2,\cdots,n$.
This implies $2Y_i\stackrel{\text{i.i.d.}}{\sim}\chi^2_2$ for all $i=1,2,\cdots,n$.
So we have two independent variables $2Y_1$ and $\sum_{i=2}^n2Y_i$. Both have the chi-square distribution --- the former with $2$ degrees of freedom and the latter with $2n-2$ degrees of freedom.
It is now straightforward to check that $2Y_1=2n(X_{(1)}-\theta)$ and $2\sum_{i=2}^nY_i=2\sum_{i=2}^n(X_{(i)}-X_{(1)})$.
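For what it's worth, a simulation is consistent with this decomposition (a Python/numpy sketch; the sample size, $\theta$, and replicate count are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta, m = 6, 2.5, 200_000  # sample size, shift, Monte Carlo replicates

X = theta + rng.exponential(size=(m, n))  # rate-1 exponentials shifted by theta
Xs = np.sort(X, axis=1)

A = 2 * n * (Xs[:, 0] - theta)                 # claimed chi^2 with 2 df
B = 2 * (Xs[:, 1:] - Xs[:, [0]]).sum(axis=1)   # claimed chi^2 with 2n - 2 df

print(A.mean())                  # close to 2 (a chi^2_k variable has mean k)
print(B.mean())                  # close to 2n - 2 = 10
print(np.corrcoef(A, B)[0, 1])   # close to 0, consistent with independence
```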
Best Answer
This is not a complete answer to all your questions, but it should provide some insight at the least. I'll give a shot at deriving the p.d.f. of the $Y_i$ (the distances between consecutive $X_i$). Once we know that, we might be able to use Bayes' theorem to find the distribution of $X_i$ given $Y_i$, but I will not address that here.
Given i.i.d. real-valued $X_i$, $i=1,2,\ldots,n$, with probability density $f$ and c.d.f. $F$, one can see that they are equally likely to be in any order, and there are $n!$ possible orders. The probability of a particular ordering is $$ P(X_{k_1}<X_{k_2}<\cdots < X_{k_n}) =\int_{-\infty}^\infty f(x_{k_1}) \int_{x_{k_1}}^\infty f(x_{k_2})\cdots\int_{x_{k_{n-1}}}^\infty f(x_{k_n}) \ dx_{k_n}\cdots \ d x_{k_2}\ d x_{k_1}. $$ Switching the order of integration gives $$ \begin{aligned} P(X_{k_1}<X_{k_2}<\cdots < X_{k_n}) &=\int_{-\infty}^\infty f(x_{k_n}) \cdots \int_{-\infty}^{x_{k_3}} f(x_{k_2}) \int_{-\infty}^{x_{k_2}} f(x_{k_1}) \ dx_{k_1}\ d x_{k_2}\cdots \ d x_{k_n}\\ &=\int_{-\infty}^\infty f(x_{k_n}) \cdots \int_{-\infty}^{x_{k_3}} f(x_{k_2}) F(x_{k_2})\ d x_{k_2}\cdots \ d x_{k_n} \\ &=\int_{-\infty}^\infty f(x_{k_n}) \cdots \int_{0}^{F(x_{k_3})} u\ d u\cdots \ d x_{k_n} . \end{aligned} $$ Note that the substitution $u=F(x)$, $du=F'(x)dx$ has been made, and can be made at each stage. Continuing in this fashion, the final value of the integral comes out to $\displaystyle \frac{1}{n!}$.
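This $1/n!$ value is easy to check by simulation (a sketch; the density and constants are arbitrary choices of mine):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n, m = 4, 1_000_000  # sample size and Monte Carlo replicates

X = rng.normal(size=(m, n))  # any continuous f works; standard normal here
# Fraction of samples that happen to arrive already in increasing order
p_hat = np.mean(np.all(np.diff(X, axis=1) > 0, axis=1))
print(p_hat, 1 / factorial(n))  # both close to 1/24
```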
Thus $$ \begin{aligned} P(Y_i<x)&=n! P(X_1<X_2<\cdots<X_i<X_{i+1}<X_i+x|X_1<X_2<\cdots<X_n)\\ &\quad\quad\quad\quad\quad\cdot P(X_1<X_2<\cdots<X_n)\\ &=n!P(\{X_1<X_2<\cdots<X_i<X_{i+1}<X_i+x\} \cap \{X_1<X_2<\cdots<X_n\}) \end{aligned} $$
So we will calculate the probability of the intersection of events $X_1<X_2<\cdots<X_i<X_{i+1}<X_i+x$ and $X_1<X_2<\cdots<X_n$.
We wish to know $F_{Y_i}(x)=P(Y_i<x)=n!\,P(X_{i+1}-X_i<x,\ X_1<\cdots<X_n)$, thus: $$ \begin{aligned} F_{Y_i}(x)= n!\int_{-\infty}^\infty &f(x_1)\int_{x_1}^\infty f(x_2)\\ &\cdots\int_{x_i}^{x_i+x}f(x_{i+1})\int_{x_{i+1}}^\infty f(x_{i+2})\\ &\quad\quad\quad\quad \cdots\int_{x_{n-1}}^\infty f(x_n) \ dx_n\cdots \ dx_1. \end{aligned} $$ This is the c.d.f. of $Y_i$. You can get the p.d.f. by taking the derivative w.r.t. $x$. This is a complicated integral, and in general, the distribution of $Y_i$ depends on $i$ and $n$. Intuitively, this makes sense as simulating a large number of i.i.d. random variables will result in consecutive values being closer.
For the $X_i$ exponentially distributed with mean $1/\lambda$, and $n=2$, $F_Y(x)=1-e^{-\lambda x}$. The calculation is $$ F_Y(x)= 2\int_0^\infty \lambda e^{-\lambda x_1} \int_{x_1}^{x_1+x} \lambda e^{-\lambda x_2} \ dx_2 \ dx_1. $$
For $n=3$, $F_{Y_2}(x)=1-e^{-\lambda x}$ and $F_{Y_1}(x)=1-e^{-2\lambda x}$. The calculations are $$ F_{Y_2}(x)= 3!\int_0^\infty \lambda e^{-\lambda x_1} \int_{x_1}^{\infty} \lambda e^{-\lambda x_2} \int_{x_2}^{x_2+x} \lambda e^{-\lambda x_3} \ dx_3 \ dx_2 \ dx_1. $$ $$ F_{Y_1}(x)= 3!\int_0^\infty \lambda e^{-\lambda x_1} \int_{x_1}^{x_1+x} \lambda e^{-\lambda x_2} \int_{x_2}^{\infty} \lambda e^{-\lambda x_3} \ dx_3 \ dx_2 \ dx_1. $$
There seems to be a pattern here that could probably be generalized to any $n$ and $i$. I haven't done the analytical calculation, but for any given $n$, $F_{Y_i}(x)=1-e^{-(n-i)\lambda x}$ appears to work. I've confirmed it numerically.
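For instance, a simulation along these lines (my choices of $n$, $i$, $\lambda$, and $x$) reproduces the conjectured formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, m = 5, 1.5, 200_000  # sample size, rate, replicates
i, x = 2, 0.4                # which gap Y_i to test, and where to evaluate

X = rng.exponential(scale=1 / lam, size=(m, n))
Xs = np.sort(X, axis=1)
Yi = Xs[:, i] - Xs[:, i - 1]  # Y_i = X_(i+1) - X_(i) with 0-based columns

print(np.mean(Yi < x))                 # empirical cdf at x
print(1 - np.exp(-(n - i) * lam * x))  # conjectured 1 - e^{-(n-i) lambda x}
```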
If the $X_i$ are uniformly distributed on the interval $[0,1]$, then $F_{Y_i}(x)=1-(1-x)^n$ for any $i$ and $n$. I performed the analytical calculation for $n=2,3$ and verified the formula numerically. Maybe an induction argument would work to prove it, or there may be some cleaner argument that requires neither integration nor induction.
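The uniform formula checks out numerically the same way (again only a sketch; the constants are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 200_000  # sample size and replicates
i, x = 3, 0.2      # which gap, and where to evaluate the cdf

U = np.sort(rng.uniform(size=(m, n)), axis=1)
Yi = U[:, i] - U[:, i - 1]  # Y_i = X_(i+1) - X_(i) with 0-based columns

print(np.mean(Yi < x))      # empirical cdf
print(1 - (1 - x) ** n)     # claimed 1 - (1-x)^n, about 0.672
```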