Refer to the lecture notes here on page 5.
Joint density of the sample $ X=(X_1,X_2,\ldots,X_n)$ for $\theta\in\mathbb R$ is as you say $$f_{\theta}( x)=\mathbf1_{\theta<x_{(1)},x_{(n)}<\theta+1}=\mathbf1_{x_{(n)}-1<\theta<x_{(1)}}\quad,\,x=(x_1,\ldots,x_n)$$
where $x_{(1)}=\min_{1\le i\le n}x_i$ and $x_{(n)}=\max_{1\le i\le n}x_i$.
It is clear that $T(x)=(x_{(1)},x_{(n)})$ is sufficient for $\theta$ by the Factorization theorem.
Define $A_x=(x_{(n)}-1,x_{(1)})$ and $A_y=(y_{(n)}-1,y_{(1)})$.
Then, for any other sample value $y=(y_1,\ldots,y_n)$, observe that the ratio $f_{\theta}(x)/f_{\theta}(y)$ takes the simple form
$$\frac{f_{\theta}(x)}{f_{\theta}(y)}=\frac{\mathbf1_{\theta\in A_x}}{\mathbf1_{\theta\in A_y}}=\begin{cases}0&,\text{ if }\theta\notin A_x,\theta\in A_y \\ 1&,\text{ if }\theta\in A_x,\theta\in A_y \\ \infty &,\text{ if }\theta\in A_x,\theta\notin A_y\end{cases}$$
Clearly this ratio is constant in $\theta$ if and only if $A_x=A_y$, i.e. iff $T(x)=T(y)$, which proves that $T$ is indeed minimal sufficient.
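As a quick sanity check, here is a small Python sketch (sample size, seed, and the true $\theta$ are arbitrary illustrative choices) that evaluates the joint density above over a grid of $\theta$ values; the ratio $f_\theta(x)/f_\theta(y)$ varies with $\theta$ whenever the two samples have different $(x_{(1)},x_{(n)})$, but is identically $1$ when $T(x)=T(y)$:

```python
import numpy as np

def joint_density(x, theta):
    # Joint density of an i.i.d. Uniform(theta, theta + 1) sample:
    # equals 1 exactly when max(x) - 1 < theta < min(x), else 0.
    return 1.0 if x.max() - 1 < theta < x.min() else 0.0

rng = np.random.default_rng(0)  # seed chosen arbitrarily
theta0 = 2.0                    # hypothetical true parameter
x = rng.uniform(theta0, theta0 + 1, size=5)
y = rng.uniform(theta0, theta0 + 1, size=5)

# Scan theta over a grid; the (0, 1, infinity) pattern of the ratio
# depends on theta because (min, max) differ between x and y.
for theta in np.linspace(theta0 - 0.4, theta0 + 0.4, 5):
    print(f"theta={theta:.2f}  f(x)={joint_density(x, theta)}  f(y)={joint_density(y, theta)}")
```

Note that a permutation of $x$ has the same $T$ and hence exactly the same density at every $\theta$, matching the "constant ratio" direction of the argument.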
Another proof using the definition of minimal sufficiency is given on page 3 of the linked notes.
As this example shows, there is no general rule of thumb for ascertaining minimal sufficiency of a statistic simply by comparing the dimension of the statistic with that of the parameter.
Using the factorization theorem, a sufficient statistic for $\theta$ is $y=\prod_i X_i$. This is because the function $g(\theta,t(\mathbf{x}))$ depends on the data only through the statistic $t(\mathbf{x})=\prod_i x_i$.
The function $\frac{1}{\prod_{i}X_{i}}$ that you wrongly identified as the sufficient statistic is the factor that depends on $\mathbf{x}$ alone.
Then the posterior is the following (hint: when calculating the posterior, discard any factor that does not depend on $\theta$):
$$\pi(\theta\mid y) \propto e^{-\beta \theta}\,\theta^n\, y^{\theta-1}$$
$$\propto e^{-\beta \theta}\,\theta^n\, e^{(\theta-1)\log y}$$
$$\propto \theta^n e^{-(\beta-\log y)\theta}$$
...we immediately recognize in this posterior the kernel of a Gamma distribution, with shape $n+1$ and rate $\beta-\log y$...
now you can finish the problem by yourself without solving the integral analytically.
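To see numerically that the kernel above really is a Gamma density, here is a short sketch (the values of $n$, $\beta$, and $y$ are hypothetical, chosen so that $\beta-\log y>0$); it normalizes the kernel by numerical integration and compares it with `scipy`'s Gamma pdf with shape $n+1$ and rate $\beta-\log y$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

# Hypothetical values for illustration: n observations with product y,
# prior rate beta; beta - log y > 0 makes the posterior proper.
n, beta, y = 5, 2.0, 0.3
rate = beta - np.log(y)

def kernel(theta):
    # Unnormalized posterior derived above: theta^n * exp(-(beta - log y) * theta)
    return theta**n * np.exp(-rate * theta)

# Normalize numerically and compare with the Gamma(n + 1, rate) density;
# the two should coincide pointwise.
Z, _ = quad(kernel, 0, np.inf)
for t in (0.5, 1.0, 2.0):
    print(t, kernel(t) / Z, gamma.pdf(t, a=n + 1, scale=1 / rate))
```

Of course, the point of recognizing the Gamma kernel is that in practice you never need this integration: the normalizing constant is $\Gamma(n+1)/(\beta-\log y)^{n+1}$ by inspection.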
It seems you are quite confused. The Fisher–Neyman factorization theorem says that if the density can be factored into nonnegative functions $h$, $g$ such that $f_\theta(x)=h(x)\,g(T(x),\theta)$, then $T(x)$ is a sufficient statistic for $\theta$. Usually people say "factorization theorem" for short.
By contrast, the definition of sufficiency is that $T(X)$ is a sufficient statistic for $\theta$ if the conditional distribution of $X$ given $T(X)$ does not depend on $\theta$.
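To make the definition concrete, here is a minimal Python sketch using the standard Bernoulli example (my own illustration, not the problem discussed above): for i.i.d. Bernoulli($p$) data, the conditional probability of the sample given $T=\sum_i X_i$ is $1/\binom{n}{t}$, with no $p$ in it.

```python
from math import comb

# Definition in action: for X_1, ..., X_n i.i.d. Bernoulli(p),
# T = sum(X_i) is sufficient because P(X = x | T = t) = 1 / C(n, t),
# which contains no p at all.
def conditional_prob(x, p):
    n, t = len(x), sum(x)
    joint = p**t * (1 - p)**(n - t)                  # P(X = x)
    marginal = comb(n, t) * p**t * (1 - p)**(n - t)  # P(T = t)
    return joint / marginal

x = (1, 0, 1, 0)  # n = 4, t = 2
for p in (0.2, 0.5, 0.9):
    print(p, conditional_prob(x, p))  # 1 / C(4, 2) = 1/6 every time
```

The $p$-dependent factors cancel in the ratio, which is exactly what "does not depend on $\theta$" means here.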