Now I hate to be the one to answer my own question, but I feel that in the time it took me to formulate my question in MathJax, I might have arrived at the answer.
First, let's look at why reducing the (joint) sufficient statistic for $\theta$ of the Uniform distribution from two dimensions to one works in the symmetric case:
Suppose $X_1,X_2,\ldots,X_n$ is a random sample from the symmetric Uniform distribution $\mathrm{Unif}(-\theta,\theta)$. By the factorization theorem, it is easy to verify that the vector $\mathbf Y = (Y_1,Y_n)$, where $Y_1 = X_{(1)}$ and $Y_n=X_{(n)}$, is a joint sufficient statistic of degree two for $\theta$, with $$K_1(Y_1,Y_n;\theta)=\left(\frac{1}{2\theta}\right)^n \cdot \mathbf 1_{(-\theta,\theta)}(Y_1) \cdot \mathbf 1_{(-\theta,\theta)}(Y_n)$$
From the two indicator functions and from the definition of order statistics, we have that $$-\theta<Y_1<Y_n<\theta \implies \theta>-Y_1 \land \theta>Y_n$$
This allows us to apply the maximum to $-Y_1$ and $Y_n$ jointly to obtain a single restriction on $\theta$: setting $Y^* = \max\{-Y_1,Y_n\}$, the equality $$\mathbf 1_{(-\theta,\theta)}(Y_1) \cdot \mathbf 1_{(-\theta,\theta)}(Y_n) = \mathbf 1_{(-\theta,\theta)}(Y^*)$$ holds.
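This identity is easy to check numerically. The following sketch (with illustrative values for $\theta$ and the sample size, both my own choices) draws a sample from $\mathrm{Unif}(-\theta,\theta)$ and confirms that the product of the two indicators agrees with the single indicator on $Y^*$ across a grid of candidate parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0  # illustrative true parameter

# Draw a sample from Unif(-theta, theta) and form the extreme order statistics.
x = rng.uniform(-theta, theta, size=100)
y1, yn = x.min(), x.max()
y_star = max(-y1, yn)  # the one-dimensional statistic Y* = max{-Y1, Yn}

# For every candidate value t > 0, the two indicators collapse into one:
# 1_(-t,t)(Y1) * 1_(-t,t)(Yn) == 1{Y* < t}  (note Y* > 0 always holds).
for t in np.linspace(0.1, 4.0, 40):
    lhs = (-t < y1 < t) and (-t < yn < t)
    rhs = y_star < t
    assert lhs == rhs
print("indicator identity holds for all tested t")
```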
On the other hand, suppose $X_1,X_2,\ldots,X_n$ is a random sample from the Uniform distribution $\mathrm{Unif}(\theta-1,\theta+1)$. By the factorization theorem, it is easy to verify that the vector $\mathbf Y = (Y_1,Y_n)$, where $Y_1 = X_{(1)}$ and $Y_n=X_{(n)}$, is a joint sufficient statistic of degree two for $\theta$, with $$K_1(Y_1,Y_n;\theta)=\left(\frac{1}{2}\right)^n \cdot \mathbf 1_{(\theta-1,\theta+1)}(Y_1) \cdot \mathbf 1_{(\theta-1,\theta+1)}(Y_n)$$
From the two indicator functions and from the definition of order statistics, we have that $$\theta-1<Y_1<Y_n<\theta+1 \implies Y_n-1<\theta<Y_1+1$$
Because $\theta$ is now sandwiched between two restrictions ("variables", for our purposes), and without the benefit of appealing to the symmetry of the situation, we have no tools available to condense the information provided by $Y_1$ and $Y_n$ any further. Thus, we must concede that the joint sufficient statistics $Y_1$ and $Y_n$ are minimal joint sufficient statistics for $\theta$ for the non-symmetric Uniform distribution. On the other hand, we have also shown that $Y^*=\max\{-Y_1,Y_n\}$ is a one-dimensional, and thus minimal, sufficient statistic for $\theta$ for the symmetric Uniform distribution.
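The contrast can also be seen numerically: in the non-symmetric case the set of $\theta$ values with positive likelihood is the interval $(Y_n-1,\,Y_1+1)$, whose two endpoints depend on different order statistics, so neither can be discarded. A small sketch (illustrative $\theta$ and sample size, my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 5.0  # illustrative true parameter

# Sample from Unif(theta - 1, theta + 1).
x = rng.uniform(theta - 1, theta + 1, size=100)
y1, yn = x.min(), x.max()

# The likelihood is positive exactly for theta in (yn - 1, y1 + 1):
# one endpoint comes from Y_n, the other from Y_1, so both statistics
# are needed to describe the restriction on theta.
lower, upper = yn - 1, y1 + 1
print(f"theta must lie in ({lower:.3f}, {upper:.3f})")
assert lower < theta < upper
```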
It's more or less correct, but there are some very minor issues...
- to verify minimality, your ratio is
$$\frac{e^{-\alpha \sum_ix_i}\times\mathbb{1}_{(0;x_{(1)})}(\theta)}{e^{-\alpha \sum_iy_i}\times\mathbb{1}_{(0;y_{(1)})}(\theta)}$$
This ratio is independent of $(\alpha;\theta)$ iff
$$
\begin{cases}
\sum_ix_i=\sum_iy_i \\
x_{(1)}=y_{(1)}
\end{cases}$$
- Considering the profile likelihood,
$$\hat{\theta}=x_{(1)}$$
Then, substituting $\hat{\theta}$ for $\theta$, you get
$$\hat{\alpha}=\frac{n}{\sum_i[x_i-x_{(1)}]}$$
Thus
$$(\hat{\alpha};\hat{\theta})=(\frac{n}{\sum_i[x_i-x_{(1)}]};x_{(1)})$$
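Both points are easy to sanity-check numerically. The sketch below assumes the shifted-exponential density $f(x;\alpha,\theta)=\alpha e^{-\alpha(x-\theta)}$ for $x>\theta$ with $\theta>0$ (inferred from the indicator $\mathbb{1}_{(0;x_{(1)})}(\theta)$ above), and uses illustrative parameter values of my own choosing: two samples sharing $\sum_i x_i$ and $x_{(1)}$ yield a parameter-free likelihood ratio, and the closed-form MLE recovers the true parameters on simulated data:

```python
import numpy as np

def likelihood(x, alpha, theta):
    # Shifted-exponential likelihood: alpha^n * exp(-alpha * sum(x - theta)),
    # nonzero only when 0 < theta < x_(1) (matching the indicator above).
    if not (0 < theta < x.min()):
        return 0.0
    return alpha ** len(x) * np.exp(-alpha * (x - theta).sum())

# Minimality: two distinct samples with the same sum and the same minimum.
x = np.array([1.0, 2.0, 3.0, 6.0])
y = np.array([1.0, 2.5, 2.5, 6.0])
assert x.sum() == y.sum() and x.min() == y.min()
for alpha in (0.5, 1.0, 2.0):
    for theta in (0.2, 0.5, 0.9):
        # Equal likelihoods for every (alpha, theta): the ratio is constant.
        assert np.isclose(likelihood(x, alpha, theta), likelihood(y, alpha, theta))

# MLE: simulate and plug into the closed-form estimators.
rng = np.random.default_rng(0)
alpha_true, theta_true = 2.0, 1.5  # illustrative values
z = theta_true + rng.exponential(scale=1 / alpha_true, size=10_000)
theta_hat = z.min()
alpha_hat = len(z) / (z - theta_hat).sum()
print(f"theta_hat = {theta_hat:.3f}, alpha_hat = {alpha_hat:.3f}")
```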
Best Answer
Your argument for the maximum likelihood estimator is fine, since the likelihood is $$e^{k\theta -\sum_i y_i} \mathbf{1}_{\theta \le \min_i y_i}.$$
As I mentioned in a comment, your MLE $\min_i y_i$ should be a function of any sufficient statistic (so, contrary to your comment, the MLE and sufficient statistics are definitely related). This is a fundamental property of sufficient statistics; see, e.g., the Wikipedia article on sufficient statistics.
Since $\min_i y_i$ is not a function of $\sum_i y_i$, we see that $\sum_i y_i$ is not a sufficient statistic. This is a good lesson to always encode the support of densities with an indicator function (as I have above) before doing further operations like maximizing the likelihood or applying the Fisher-Neyman factorization theorem. With the indicator function, you can see that the factorization $$e^{k\theta} \mathbf{1}_{\theta \le \min_i y_i} \cdot e^{-\sum_i y_i} = g(\min_i y_i, \theta) \cdot h(y_1, \ldots, y_n)$$ shows that $\min_i y_i$ is a sufficient statistic.
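To make the factorization concrete, here is a small numerical sketch. It assumes a shifted $\mathrm{Exp}(1)$ model with density $e^{-(y-\theta)}$ for $y \ge \theta$ and takes $k$ to be the sample size $n$ — both assumptions inferred from the likelihood above:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = 0.7  # illustrative value
y = theta_true + rng.exponential(size=8)
n = len(y)

def full_likelihood(theta):
    # exp(n*theta - sum(y)) on {theta <= min(y)}, zero otherwise.
    if theta > y.min():
        return 0.0
    return np.exp(n * theta - y.sum())

def g(t_min, theta):
    # Depends on the data only through min(y): the sufficient-statistic factor.
    return np.exp(n * theta) * (theta <= t_min)

def h():
    # Free of theta: the nuisance factor.
    return np.exp(-y.sum())

# The factorization g(min(y), theta) * h(y) reproduces the full likelihood.
for theta in (0.0, 0.5, y.min(), 1.0):
    assert np.isclose(full_likelihood(theta), g(y.min(), theta) * h())
print("factorization verified at all tested theta")
```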