Refer to the lecture notes here on page 5.
The joint density of the sample $X=(X_1,X_2,\ldots,X_n)$ for $\theta\in\mathbb R$ is, as you say, $$f_{\theta}(x)=\mathbf1_{\theta<x_{(1)},\,x_{(n)}<\theta+1}=\mathbf1_{x_{(n)}-1<\theta<x_{(1)}},\qquad x=(x_1,\ldots,x_n),$$
where $x_{(1)}=\min_{1\le i\le n}x_i$ and $x_{(n)}=\max_{1\le i\le n}x_i$.
It is clear that $T(x)=(x_{(1)},x_{(n)})$ is sufficient for $\theta$ by the Factorization theorem.
Define $A_x=(x_{(n)}-1,x_{(1)})$ and, for $y=(y_1,\ldots,y_n)$, $A_y=(y_{(n)}-1,y_{(1)})$.
Then the ratio $f_{\theta}(x)/f_{\theta}(y)$ takes the simple form
$$\frac{f_{\theta}(x)}{f_{\theta}(y)}=\frac{\mathbf1_{\theta\in A_x}}{\mathbf1_{\theta\in A_y}}=\begin{cases}0, &\text{if }\theta\notin A_x,\ \theta\in A_y, \\ 1, &\text{if }\theta\in A_x,\ \theta\in A_y, \\ \infty, &\text{if }\theta\in A_x,\ \theta\notin A_y,\end{cases}$$
and it is undefined ($0/0$) when $\theta\notin A_x$ and $\theta\notin A_y$. Clearly this ratio is constant in $\theta$ if and only if $A_x=A_y$, that is, if and only if $T(x)=T(y)$, which proves that $T$ is indeed minimal sufficient.
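The likelihood-ratio criterion above can be checked numerically. The following is a minimal sketch; the helper `ratio_constant_in_theta`, the $\theta$ grid, and the sample values are illustrative choices, not part of the original argument.

```python
import numpy as np

def ratio_constant_in_theta(x, y, thetas):
    """Check numerically whether f_theta(x)/f_theta(y) is constant in theta
    for the U(theta, theta+1) model, using the indicator-interval form.

    The ratio is 1_{theta in A_x} / 1_{theta in A_y} with
    A_x = (max(x)-1, min(x)); the undefined 0/0 case is skipped.
    """
    ax = (max(x) - 1, min(x))
    ay = (max(y) - 1, min(y))
    vals = set()
    for t in thetas:
        num = float(ax[0] < t < ax[1])
        den = float(ay[0] < t < ay[1])
        if num == den == 0.0:          # 0/0: both densities vanish, skip
            continue
        vals.add(np.inf if den == 0.0 else num / den)
    return len(vals) <= 1              # constant over the theta grid

thetas = np.linspace(-2, 2, 2001)

x = [0.2, 0.5, 0.9]                    # T(x) = (0.2, 0.9)
y = [0.2, 0.7, 0.9]                    # same (min, max): ratio constant
z = [0.3, 0.5, 0.9]                    # different min:   ratio varies
print(ratio_constant_in_theta(x, y, thetas))  # True
print(ratio_constant_in_theta(x, z, thetas))  # False
```

As expected, samples sharing $(x_{(1)},x_{(n)})$ give a ratio that is constant in $\theta$, while changing the minimum alone makes the ratio take both values $0$ and $1$ on the grid.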
Another proof using the definition of minimal sufficiency is given on page 3 of the linked notes.
As this example shows, there is no general rule of thumb for ascertaining minimal sufficiency of a statistic simply by comparing the dimension of the statistic with that of the parameter.
$T(\vec{X})$ is a sufficient statistic if
$$
F (\vec{x}|T(\vec{X})=t, \theta) = F (\vec{x}|T(\vec{X})=t)
$$
By definition, taking $T(\vec{X})=(X_1,\ldots,X_{n-1})$,
$$
\begin{aligned}
F(\vec{x}|T(\vec{X})=(t_1, ..., t_{n-1}),\ \theta)
&= \mathbb{P}(X_1 < x_1, ..., X_n < x_n |X_1 = t_1, ..., X_{n-1} = t_{n-1},\ \theta)\\
&=
\begin{cases}
0, &\text{if } \exists\, i: t_i \ge x_i\\
\mathbb{P}(X_n < x_n \mid \theta), &\text{if } \forall\, i: t_i < x_i
\end{cases}\\
&=
\begin{cases}
0, &\text{if } \exists\, i: t_i \ge x_i\\
F(x_n \mid \theta), &\text{if } \forall\, i: t_i < x_i.
\end{cases}
\end{aligned}
$$
Since $(X_1, \ldots, X_n)$ are independent, the conditional distribution given $T(\vec{X}) = (X_1, \ldots, X_{n-1})$ reduces to $F(x_n \mid \theta)$, which still depends on $\theta$; hence this $T$ is not sufficient (unless the distribution of $X_n$ is free of $\theta$).
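To see concretely that $F(x_n\mid\theta)$ moves with $\theta$, here is a small sketch using a hypothetical $N(\theta,1)$ model for the $X_i$ (this specific model is an assumption for illustration, not from the answer above):

```python
from math import erf, sqrt

def norm_cdf(x, theta):
    """CDF of N(theta, 1); stands in for F(x_n | theta)."""
    return 0.5 * (1.0 + erf((x - theta) / sqrt(2.0)))

# By independence, the conditional cdf of the sample given
# T = (X_1, ..., X_{n-1}) reduces to F(x_n | theta), which varies with theta:
print(norm_cdf(0.0, theta=0.0))   # 0.5
print(norm_cdf(0.0, theta=1.0))   # ~0.1587
```

Since the two values differ, the conditional law given $T$ is not free of $\theta$, confirming that the first $n-1$ observations alone are not sufficient here.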