Refer to the lecture notes here on page 5.
Joint density of the sample $ X=(X_1,X_2,\ldots,X_n)$ for $\theta\in\mathbb R$ is as you say $$f_{\theta}( x)=\mathbf1_{\theta<x_{(1)},x_{(n)}<\theta+1}=\mathbf1_{x_{(n)}-1<\theta<x_{(1)}}\quad,\,x=(x_1,\ldots,x_n)$$
where $x_{(1)}=\min_{1\le i\le n}x_i$ and $x_{(n)}=\max_{1\le i\le n}x_i$.
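(To spell out the first equality above: the product of the individual $Unif(\theta,\theta+1)$ densities collapses into a pair of indicator conditions on the extremes, $$f_\theta(x)=\prod_{i=1}^n \mathbf 1_{\theta<x_i<\theta+1}=\mathbf 1_{\theta<x_{(1)}}\cdot\mathbf 1_{x_{(n)}<\theta+1},$$ since all $n$ constraints hold exactly when the two extreme ones do.)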
It is clear that $T(x)=(x_{(1)},x_{(n)})$ is sufficient for $\theta$ by the Factorization theorem.
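Explicitly, one convenient factorization is $$f_\theta(x)=g_\theta\big(T(x)\big)\,h(x),\qquad g_\theta(t_1,t_2)=\mathbf 1_{t_2-1<\theta<t_1},\quad h(x)\equiv 1.$$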
For $y=(y_1,\ldots,y_n)$, define $A_x=(x_{(n)}-1,x_{(1)})$ and $A_y=(y_{(n)}-1,y_{(1)})$.
Then the ratio $f_{\theta}(x)/f_{\theta}(y)$ takes the simple form
$$\frac{f_{\theta}(x)}{f_{\theta}(y)}=\frac{\mathbf1_{\theta\in A_x}}{\mathbf1_{\theta\in A_y}}=\begin{cases}0&,\text{ if }\theta\notin A_x,\,\theta\in A_y \\ 1&,\text{ if }\theta\in A_x,\,\theta\in A_y \\ \infty &,\text{ if }\theta\in A_x,\,\theta\notin A_y \\ 0/0\ (\text{undefined})&,\text{ if }\theta\notin A_x,\,\theta\notin A_y\end{cases}$$
Clearly this is independent of $\theta$ if and only if $A_x=A_y$, that is, iff $T(x)=T(y)$, which proves that $T$ is indeed minimal sufficient.
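(As a quick sanity check of the ratio criterion, not part of the proof itself: a few lines of Python with made-up sample values show that samples sharing the same $T=(x_{(1)},x_{(n)})$ have identical likelihoods for every $\theta$, while samples with a different $T$ do not.)

```python
import numpy as np

def likelihood(theta, x):
    """Joint Unif(theta, theta+1) density at the sample x:
    1 iff x_(n) - 1 < theta < x_(1), and 0 otherwise."""
    return float(x.max() - 1 < theta < x.min())

x = np.array([0.2, 0.5, 0.9])  # T(x) = (0.2, 0.9)
y = np.array([0.2, 0.7, 0.9])  # same T as x
z = np.array([0.3, 0.5, 0.9])  # different T: (0.3, 0.9)

for theta in [-0.05, 0.15, 0.25]:
    print(theta, likelihood(theta, x), likelihood(theta, y), likelihood(theta, z))
# x and y agree at every theta; x and z disagree at theta = 0.25,
# so the ratio f_theta(x)/f_theta(z) depends on theta.
```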
Another proof using the definition of minimal sufficiency is given on page 3 of the linked notes.
As this example shows, there is no general rule of thumb for ascertaining minimal sufficiency of a statistic simply by comparing the dimension of the statistic with that of the parameter.
Now I hate to be the one to answer my own question, but I feel that in the time it took me to formulate my question in MathJax, I might have arrived at the answer.
First, let's look at why the reduction of a (joint) sufficient statistic vector for $\theta$ from two dimensions to one works for the Uniform distribution when its support is symmetric:
Suppose $X_1,X_2,\ldots,X_n$ is a random sample from the symmetric Uniform distribution $Unif(-\theta,\theta)$. By the factorization theorem, it is easy to verify that the vector $\mathbf Y = (Y_1,Y_n)$, where $Y_1 = X_{(1)}$ and $Y_n=X_{(n)}$, is a joint sufficient vector of degree two for $\theta$, with $$K_1(Y_1,Y_n;\theta)=\left(\frac{1}{2\theta}\right)^n \cdot \mathbf 1_{(-\theta,\theta)}(Y_1) \cdot \mathbf 1_{(-\theta,\theta)}(Y_n)$$
From the two indicator functions and from the definition of order statistics, we have that $$-\theta<Y_1<Y_n<\theta \implies \theta>-Y_1 \land \theta>Y_n$$
This allows us to apply the maximum function to $-Y_1$ and $Y_n$ simultaneously to obtain a single restriction on $\theta$: with $Y^* = \max\{-Y_1,Y_n\}$, the equality $$\mathbf 1_{(-\theta,\theta)}(Y_1) \cdot \mathbf 1_{(-\theta,\theta)}(Y_n) = \mathbf 1_{(-\theta,\theta)}(Y^*)$$ holds.
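Indeed, since $Y_1\le Y_n$ forces $Y^*=\max\{-Y_1,Y_n\}\ge 0$, the chain $$\mathbf 1_{(-\theta,\theta)}(Y_1)\cdot\mathbf 1_{(-\theta,\theta)}(Y_n)=\mathbf 1_{\{-Y_1<\theta\}}\cdot\mathbf 1_{\{Y_n<\theta\}}=\mathbf 1_{\{\max\{-Y_1,Y_n\}<\theta\}}=\mathbf 1_{(-\theta,\theta)}(Y^*)$$ spells the equality out, where the first step uses $Y_1\le Y_n$ to drop the redundant constraints $Y_1<\theta$ and $-\theta<Y_n$.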
On the other hand, suppose $X_1,X_2,\ldots,X_n$ is a random sample from the Uniform distribution $Unif(\theta-1,\theta+1)$. By the factorization theorem, it is easy to verify that the vector $\mathbf Y = (Y_1,Y_n)$, where $Y_1 = X_{(1)}$ and $Y_n=X_{(n)}$, is a joint sufficient vector of degree two for $\theta$, with $$K_1(Y_1,Y_n;\theta)=\left(\frac{1}{2}\right)^n \cdot \mathbf 1_{(\theta-1,\theta+1)}(Y_1) \cdot \mathbf 1_{(\theta-1,\theta+1)}(Y_n)$$
From the two indicator functions and from the definition of order statistics, we have that $$\theta-1<Y_1<Y_n<\theta+1 \implies Y_1+1>\theta \land Y_n-1<\theta$$
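Equivalently (again using $Y_1\le Y_n$ to drop the redundant constraints), the two indicators here collapse into a two-sided condition on $\theta$ rather than a one-sided one: $$\mathbf 1_{(\theta-1,\theta+1)}(Y_1)\cdot\mathbf 1_{(\theta-1,\theta+1)}(Y_n)=\mathbf 1_{\{Y_n-1<\theta<Y_1+1\}},$$ and both endpoints $Y_n-1$ and $Y_1+1$ vary with the data.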
Because $\theta$ is now sandwiched between two data-dependent restrictions ("variables", for our purposes), and without the benefit of appealing to symmetry, we have no tools available to condense the information provided by $Y_1$ and $Y_n$ any further. Thus, we must concede that the joint sufficient statistics $Y_1$ and $Y_n$ are jointly minimal sufficient for $\theta$ for a non-symmetric Uniform distribution. On the other hand, we have also shown that $Y^*=\max\{-Y_1,Y_n\}$ is a one-dimensional, and thus minimal, sufficient statistic for $\theta$ for a symmetric Uniform distribution.
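Here is a minimal numerical sketch of this contrast (assuming numpy; the true parameter values and sample size are arbitrary, chosen only for illustration): in the symmetric model the set of $\theta$ with positive likelihood is the half-line $(Y^*,\infty)$, determined by one number, while in the shifted model it is the interval $(Y_n-1,\,Y_1+1)$, which needs both endpoints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric model Unif(-theta, theta), true theta = 3 (arbitrary):
# the likelihood is positive exactly when theta > max(-Y_1, Y_n),
# so the single number Y* pins down the whole likelihood set.
x_sym = rng.uniform(-3.0, 3.0, size=10)
y_star = max(-x_sym.min(), x_sym.max())
print(f"symmetric model: positive likelihood iff theta > {y_star:.3f}")

# Shifted model Unif(theta-1, theta+1), true theta = 2 (arbitrary):
# the likelihood is positive exactly when Y_n - 1 < theta < Y_1 + 1,
# so both order statistics are needed to describe the likelihood set.
x_shift = rng.uniform(2.0 - 1.0, 2.0 + 1.0, size=10)
lo, hi = x_shift.max() - 1.0, x_shift.min() + 1.0
print(f"shifted model: positive likelihood iff {lo:.3f} < theta < {hi:.3f}")
```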
Best Answer
If $\max\{-X_{(1)}, X_{(n)}\}$ is sufficient, then necessarily $(X_{(1)},X_{(n)})$ is sufficient, since if you know the latter you can easily compute the former. But the latter, the pair, cannot be a minimal sufficient statistic if the former is sufficient: a minimal sufficient statistic must be a function of every sufficient statistic, and the pair is not a function of the maximum, since knowing the maximum does not give you enough information to recover the pair.
Being sufficient does not mean it gives enough information to describe the data; rather it means it gives all information in the data that is relevant to inference about $\theta$, given that the proposed model is right. The model is that the observations come from a uniform distribution on an interval symmetric about $0.$ But the data may also contain information calling that model into question and the sufficient statistic doesn't give that information.
By definition, that the maximum is sufficient means that the conditional distribution of the data given the maximum does not depend on $\theta.$
You are trying to show that $\dfrac{\mathbb{1}_{[\max\{-X_{(1)},X_{(n)}\}<\theta]}}{\mathbb{1}_{[\max\{-Y_{(1)},Y_{(n)}\}<\theta]}} \vphantom{\dfrac 1 {\displaystyle\sum}}$ does not depend on $\theta$ when the two maxima are equal. I think in cases like this, where $0/0$ can appear, one should phrase the result as saying that if the two maxima are equal then there is some number $c\ne0$ such that $$ \mathbb{1}_{[\max\{-X_{(1)},X_{(n)}\}<\theta]} = c \mathbb{1}_{[\max\{-Y_{(1)},Y_{(n)}\}<\theta]} $$ and that this equality continues to hold as $\theta$ changes within the parameter space $(0,\infty).$