Now I hate to be the one to answer my own question, but I feel that in the time it took me to formulate my question in MathJax, I might have arrived at the answer.
First, let's look at why the reduction from a two-dimensional joint sufficient statistic for $\theta$ to a one-dimensional one works in the symmetric Uniform case:
Suppose $X_1,X_2,\ldots,X_n$ is a random sample from the symmetric Uniform distribution $\mathrm{Unif}(-\theta,\theta)$. By the factorization theorem, it is easy to verify that the vector $\mathbf Y = (Y_1,Y_n)$, where $Y_1 = X_{(1)}$ and $Y_n = X_{(n)}$, is a joint sufficient statistic of degree two for $\theta$, with $$K_1(Y_1,Y_n;\theta)=\left(\frac{1}{2\theta}\right)^n \cdot \mathbf 1_{(-\theta,\theta)}(Y_1) \cdot \mathbf 1_{(-\theta,\theta)}(Y_n)$$
From the two indicator functions and from the definition of order statistics, we have that $$-\theta<Y_1<Y_n<\theta \implies \theta>-Y_1 \land \theta>Y_n$$
This allows us to apply the maximum function to $-Y_1$ and $Y_n$ simultaneously, condensing the two restrictions on $\theta$ into one: setting $Y^* = \max\{-Y_1,Y_n\}$, the equality $$\mathbf 1_{(-\theta,\theta)}(Y_1) \cdot \mathbf 1_{(-\theta,\theta)}(Y_n) = \mathbf 1_{(-\theta,\theta)}(Y^*)$$ holds.
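A quick numerical sanity check of this identity (a sketch I wrote for illustration, with an assumed true value $\theta = 3$ and sample size 20): for every candidate value $t$, the product of the two indicators agrees with the single indicator evaluated at $Y^*$.

```python
import numpy as np

# Hypothetical demo (not from the original answer): for a sample from
# Unif(-theta, theta), check that 1(-t<Y1<t) * 1(-t<Yn<t) equals
# 1(-t<Y*<t) with Y* = max(-Y1, Yn), for every candidate value t > 0.
rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, size=20)          # assumed true theta = 3
y1, yn = x.min(), x.max()
y_star = max(-y1, yn)

for t in np.linspace(0.1, 6.0, 60):          # sweep candidate theta values
    lhs = (-t < y1 < t) and (-t < yn < t)    # product of the two indicators
    rhs = -t < y_star < t                    # single indicator at Y*
    assert lhs == rhs
print("indicator identity holds; Y* =", y_star)
```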
On the other hand, suppose $X_1,X_2,\ldots,X_n$ is a random sample from the Uniform distribution $\mathrm{Unif}(\theta-1,\theta+1)$. By the factorization theorem, it is easy to verify that the vector $\mathbf Y = (Y_1,Y_n)$, where $Y_1 = X_{(1)}$ and $Y_n = X_{(n)}$, is a joint sufficient statistic of degree two for $\theta$, with $$K_1(Y_1,Y_n;\theta)=\left(\frac{1}{2}\right)^n \cdot \mathbf 1_{(\theta-1,\theta+1)}(Y_1) \cdot \mathbf 1_{(\theta-1,\theta+1)}(Y_n)$$
From the two indicator functions and from the definition of order statistics, we have that $$\theta-1<Y_1<Y_n<\theta+1 \implies Y_1+1>\theta \land Y_n-1<\theta$$
Because $\theta$ is now sandwiched between two separate restrictions, $Y_n-1<\theta<Y_1+1$, and we cannot appeal to symmetry, there is no way to condense the information provided by $Y_1$ and $Y_n$ any further. Thus, we must concede that $Y_1$ and $Y_n$ are joint minimal sufficient statistics for $\theta$ for a non-symmetric Uniform distribution. On the other hand, we have also shown that $Y^*=\max\{-Y_1,Y_n\}$ is a one-dimensional, and thus minimal, sufficient statistic for $\theta$ for a symmetric Uniform distribution.
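The two-sided sandwich can also be seen numerically. In this illustrative sketch (sample values and true $\theta = 5$ are assumed), the likelihood for $\mathrm{Unif}(\theta-1,\theta+1)$ equals $(1/2)^n$ exactly on the interval $(Y_n-1,\,Y_1+1)$ and vanishes outside it, so both endpoints, and hence both order statistics, are needed:

```python
import numpy as np

# Hypothetical illustration: for Unif(theta-1, theta+1), the likelihood is
# (1/2)^n when theta-1 < Y1 and Yn < theta+1, i.e. on (Yn - 1, Y1 + 1),
# and 0 otherwise -- the data constrain theta from both sides.
rng = np.random.default_rng(1)
theta_true = 5.0                             # assumed for the demo
x = rng.uniform(theta_true - 1, theta_true + 1, size=15)
y1, yn = x.min(), x.max()

def likelihood(t):
    # product of Unif(t-1, t+1) densities; nonzero iff t-1 < y1 and yn < t+1
    return 0.5 ** len(x) if (t - 1 < y1) and (yn < t + 1) else 0.0

lo, hi = yn - 1, y1 + 1                      # every theta in (lo, hi) maximizes L
assert likelihood((lo + hi) / 2) > 0
assert likelihood(lo - 0.01) == 0 and likelihood(hi + 0.01) == 0
print(f"L(theta) > 0 exactly on ({lo:.3f}, {hi:.3f})")
```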
- First of all, your sufficient statistic is wrong.
The density can be written in the following way
$$f_X(x|\theta)=e^{\theta-x-e^{\theta-x}}$$
This can be viewed in the following way
$$f_X(x|\theta)=e^{-x}e^{\theta-e^{\theta}e^{-x}}$$
This shows that $f_X(x|\theta)$ belongs to the exponential family, and thus
$$S=\sum_{i=1}^n e^{-x_i}$$
is sufficient and complete.
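Sufficiency can be checked numerically: the likelihood below depends on the data only through $n$ and $S$, so two different samples with the same $S$ give identical likelihoods for every $\theta$. A small sketch (the sample values are assumed, chosen so both give $S = 2$):

```python
import math

# Hypothetical check: the log-likelihood n*theta - e^theta * S depends on the
# data only through n and S = sum(exp(-x_i)), so two samples with equal S
# produce identical log-likelihoods at every theta.
def loglik(theta, xs):
    S = sum(math.exp(-x) for x in xs)
    return len(xs) * theta - math.exp(theta) * S

xs_a = [0.0, 0.0]                                  # S = exp(0) + exp(0) = 2
xs_b = [1.0, -math.log(2.0 - math.exp(-1.0))]      # chosen so S = 2 as well

for theta in [-1.0, 0.0, 0.5, 2.0]:
    assert abs(loglik(theta, xs_a) - loglik(theta, xs_b)) < 1e-12
print("likelihoods agree for every theta tested")
```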
- MLE. Without doing any calculation, at this point you already know that the MLE is a function of the sufficient statistic (it's a property of the MLE).
The likelihood is
$$L(\theta) \propto e^{n\theta-e^{\theta}\sum_{i=1}^n e^{-x_i}}$$
Taking the log gives
$$l(\theta)=n\theta-e^{\theta}\sum_{i=1}^n e^{-x_i}$$
and differentiating $l(\theta)$,
$$l'(\theta)=n-e^{\theta}\sum_{i=1}^n e^{-x_i}$$
Setting this equal to zero immediately leads to
$$\hat{\theta}_{ML}=\log\frac{n}{\sum_{i=1}^n e^{-x_i}}=\log \frac{n}{S}$$
...a function of $S$, as already known.
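The closed form can be sanity-checked by comparing it against nearby values of $l(\theta)$ on a grid. This sketch uses an arbitrary assumed sample:

```python
import math

# Hypothetical numerical check (sample values assumed): the closed form
# theta_hat = log(n / S), with S = sum(exp(-x_i)), maximizes
# l(theta) = n*theta - e^theta * S.
xs = [0.2, 1.5, -0.3, 0.8, 2.1]
n = len(xs)
S = sum(math.exp(-x) for x in xs)
theta_hat = math.log(n / S)

def l(theta):
    return n * theta - math.exp(theta) * S

# l at the closed-form solution beats nearby values
assert all(l(theta_hat) >= l(theta_hat + d) for d in [-0.5, -0.1, 0.1, 0.5])
print("theta_hat =", theta_hat)
```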
Best Answer
You can absolutely use the Factorization theorem. In fact, you already have the factorization at the end of the first line of your MLE calculation: $$L(\theta \mid \boldsymbol x) = \theta^n \left(\prod_{i=1}^n (1-x_i)\right)^{\theta-1}$$ implies the choice $$h(\boldsymbol x) = 1, \quad T(\boldsymbol x) = \prod_{i=1}^n (1-x_i), \quad g(T \mid \theta) = \theta^n T^{\theta-1}.$$ Thus $T(\boldsymbol x)$ is sufficient for $\theta$, and the MLE $$\hat \theta = -\frac{n}{\log T},$$ which is a monotone function of $T$, is also sufficient for $\theta$.
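Both the statistic and the MLE above are easy to verify numerically. In this sketch (the sample in $(0,1)$ is assumed), $\hat\theta = -n/\log T$ satisfies the first-order condition $n/\theta + \log T = 0$ and beats nearby values of the log-likelihood:

```python
import math

# Hypothetical check (sample values assumed): with T = prod(1 - x_i), the
# closed-form MLE theta_hat = -n / log(T) maximizes
# l(theta) = n*log(theta) + (theta - 1)*log(T).
xs = [0.1, 0.4, 0.7, 0.25, 0.55]
n = len(xs)
logT = sum(math.log(1.0 - x) for x in xs)   # log T, negative for x in (0,1)
theta_hat = -n / logT

def l(theta):
    return n * math.log(theta) + (theta - 1.0) * logT

assert all(l(theta_hat) >= l(theta_hat * f) for f in [0.5, 0.9, 1.1, 2.0])
print("theta_hat =", theta_hat)
```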
I should note that to be fully correct, the likelihood function should actually be written $$L(\theta \mid \boldsymbol x) = \theta^n \left(\prod_{i=1}^n (1-x_i)\right)^{\theta-1} \prod_{i=1}^n \mathbb 1(0 < x_i < 1) = \theta^n \left(\prod_{i=1}^n (1-x_i)\right)^{\theta-1} \mathbb 1(0 < x_{(1)} \le x_{(n)} < 1),$$ hence the actual form of the factorization has $h(\boldsymbol x) = \mathbb 1(0 < x_{(1)} \le x_{(n)} < 1)$. This does not affect your MLE or the proof of sufficiency: because the support $(0,1)$ does not depend on $\theta$, the indicator factor equals $1$ with probability $1$ under every $\theta$, so it carries no information about $\theta$ and is ancillary.