I'll sketch out an approach but I'll leave the details up to you.
You can show that $\log(1 + X) \sim \Gamma(1, \theta^{-1})$. Use this to find the distribution of your sufficient statistic.
Then you need to suppose that $E(g(T)) = 0$ for an arbitrary function $g$, i.e.
$$
\int \limits_0^\infty g(t) f_T(t)dt = 0
$$
where I'm ignoring parameters. You'll need to fill those in appropriately.
Use this to make a statement about $g$ so that you can conclude that $P(g(T) = 0) = 1$ for all $\theta \in \Theta$.
Edit:
Because you know the exponential family result, I'll go through this proof in more detail.
Let $Y = \log(1+X)$ and note that this transformation is one-to-one. Its inverse is $X = e^Y - 1$, so the Jacobian is $dx/dy = e^y$. Putting this together we have
$$
f_Y(y|\theta) = f_X(e^y - 1|\theta) \times e^y
$$
$$
= \frac{\theta e^y}{(1 + e^y - 1)^{\theta + 1}} \times I(0 < y < \infty)
$$
$$
= \theta e^{-\theta y}
$$
where I'm dropping the indicator $I(0 < y < \infty)$ because the support doesn't depend on $\theta$, so it's not important here. This is the $\Gamma(1, \theta^{-1})$ (i.e. Exponential with rate $\theta$) density, so $Y \sim \Gamma(1, \theta^{-1})$.
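If you want a quick numerical sanity check of this step, here is a minimal Monte Carlo sketch. It assumes the starting density is the Pareto-type $f_X(x\mid\theta)=\theta/(1+x)^{\theta+1}$ on $(0,\infty)$ (the Lomax law, which is what the algebra above uses) and relies on SciPy's `lomax` and `expon`; the value of $\theta$ is arbitrary:

```python
# Monte Carlo sanity check (a sketch, not part of the proof):
# if X has density theta / (1 + x)^(theta + 1)  (a Lomax / Pareto-II law),
# then Y = log(1 + X) should be Exponential with rate theta,
# i.e. Gamma(shape 1, scale 1/theta).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta = 2.5                       # arbitrary parameter value
x = stats.lomax.rvs(c=theta, size=100_000, random_state=rng)
y = np.log1p(x)                   # Y = log(1 + X)

# Compare the empirical mean and a KS test against Exp(rate = theta)
print("mean of Y:", y.mean(), "vs 1/theta =", 1 / theta)
print(stats.kstest(y, stats.expon(scale=1 / theta).cdf))
```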
Since $T = \sum_{i=1}^n \log(1+X_i)$ is the sum of $n$ i.i.d. $\Gamma(1, \theta^{-1})$ variables, $T \sim \Gamma(n, \theta^{-1})$ and therefore
$$
E(g(T)) = \int \limits_0^\infty g(t) t^{n-1} e^{-\theta t} \frac{\theta^n}{\Gamma(n)} dt =_{set} 0
$$
$$
\implies \int \limits_0^\infty g(t) t^{n-1} e^{-\theta t} dt = 0 \quad \text{for all } \theta > 0.
$$
Because $t^{n-1} e^{-\theta t} > 0$ for all $t > 0$ and $\theta > 0$, and the integral vanishes for every $\theta > 0$, it must be that $g(t) = 0$ almost everywhere (this can be made fully rigorous via the uniqueness of the Laplace transform).
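As an empirical check of the distributional fact this argument rests on, the sketch below (under the same Lomax assumption as above, with arbitrary $\theta$ and $n$) compares $T=\sum_{i=1}^n\log(1+X_i)$ with the $\Gamma(n,\theta^{-1})$ law:

```python
# Quick check that T = sum_i log(1 + X_i) is Gamma(shape n, scale 1/theta),
# the distribution used in the completeness argument.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta, n, reps = 2.5, 5, 50_000   # arbitrary parameter and sample size
x = stats.lomax.rvs(c=theta, size=(reps, n), random_state=rng)
t = np.log1p(x).sum(axis=1)       # one value of T per simulated sample

print(stats.kstest(t, stats.gamma(a=n, scale=1 / theta).cdf))
```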
Answer already hinted at in comments...
A complete sufficient statistic does exist for this family of distributions (a Gumbel density with unit scale parameter). The set of order statistics is not minimal sufficient here. Order statistics, which are trivially sufficient for any family of distributions, should be the fallback only when no other (non-trivial) sufficient statistic can be found, as happens for the Laplace distribution and the Cauchy distribution with unknown location parameter.
By independence, the joint density of $(X_1,X_2,\cdots,X_n)$ is
\begin{align}
f_{\theta}(x_1,x_2,\cdots,x_n)&=\prod_{i=1}^nf(x_i\mid\theta)
\\&=e^{-\sum_{i=1}^n x_i+n\theta}\exp\left(-\sum_{i=1}^n e^{-(x_i-\theta)}\right)
\\&=\exp\left(-e^{\theta}\sum_{i=1}^n e^{-x_i}+n\theta\right)e^{-\sum_{i=1}^nx_i}
\\&=g(\theta,t(\mathbf x))\,h(\mathbf x)\qquad,\text{ for all } \mathbf x=(x_1,\cdots,x_n)\in\mathbb R^n\,,\,\theta\in\mathbb R
\end{align}
where $g(\theta,t(\mathbf x))=\exp\left(-e^{\theta}\sum_{i=1}^n e^{-x_i}+n\theta\right)$ depends on $\theta$ and on $x_1,x_2,\cdots,x_n$ only through $t(\mathbf x)=\sum_{i=1}^n e^{-x_i}$, and $h(\mathbf x)=e^{-\sum_{i=1}^nx_i}$ does not depend on $\theta$.
So by the Factorisation Theorem, a sufficient statistic for $\theta$ is $$T(\mathbf X)=\sum_{i=1}^n e^{-X_i}$$
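If it is helpful, the factorisation can be verified numerically on a simulated sample; the sketch below uses SciPy's `gumbel_r` for the unit-scale Gumbel density with location $\theta$ and simply compares $\prod_i f(x_i\mid\theta)$ with $g(\theta,t(\mathbf x))\,h(\mathbf x)$ (the values of $\theta$ and $n$ are arbitrary):

```python
# Numerical check of the factorisation f_theta(x) = g(theta, t(x)) * h(x)
# for a Gumbel(location theta, scale 1) sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta, n = 0.7, 6
x = stats.gumbel_r.rvs(loc=theta, size=n, random_state=rng)

joint = np.prod(stats.gumbel_r.pdf(x, loc=theta))   # prod_i f(x_i | theta)
t = np.sum(np.exp(-x))                               # t(x) = sum_i exp(-x_i)
g = np.exp(-np.exp(theta) * t + n * theta)           # g(theta, t(x))
h = np.exp(-np.sum(x))                               # h(x)

print(joint, g * h)   # the two numbers should agree up to rounding
```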
Let $Y=e^{-X}$ where $X$ has the density $f(x\mid\theta)$.
Then density of $Y$ is
\begin{align}f_Y(y)&=f(-\ln y\mid\theta)\left|\frac{dx}{dy}\right|\mathbf1_{y>0}
\\&=\frac{1}{y}e^{\ln y+\theta}\exp\left(-e^{\ln y+\theta}\right)\mathbf1_{y>0}
\\&=e^{\theta}\exp(-ye^{\theta})\mathbf1_{y>0}
\end{align}
That is, $Y\sim\text{Exp}(\lambda)$ with $\lambda=e^{\theta}(>0)$.
In other words, $e^{-X_i}\stackrel{\text{i.i.d.}}\sim\text{Exp}(\lambda)$ for $i=1,2,\cdots,n$, which implies
$$T\sim\text{Gamma}(\lambda,n)\,,$$
a Gamma distribution with rate $\lambda$ and shape $n$.
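Both distributional claims are easy to check by simulation; here is a short sketch, again taking the unit-scale Gumbel with location $\theta$ as the model and arbitrary values of $\theta$ and $n$:

```python
# Monte Carlo check that e^{-X} ~ Exp(rate e^theta) and that
# T = sum_i e^{-X_i} ~ Gamma(shape n, rate e^theta).  A sketch only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
theta, n, reps = 0.7, 6, 50_000
lam = np.exp(theta)

x = stats.gumbel_r.rvs(loc=theta, size=(reps, n), random_state=rng)
y = np.exp(-x)

print(stats.kstest(y.ravel(), stats.expon(scale=1 / lam).cdf))           # Y ~ Exp(lambda)
print(stats.kstest(y.sum(axis=1), stats.gamma(a=n, scale=1 / lam).cdf))  # T ~ Gamma
```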
To show that $T$ is indeed a complete statistic, we have to show that for any function $\psi$ of $T$, $$E_{\theta}(\psi(T))=0\;\text{ for all }\theta\implies \psi(T)=0\quad\text{ a.e.}$$
Now,
\begin{align}E_{\theta}(\psi(T))&=0\quad\text{ for all }\theta\in\mathbb R
\\\implies \int_0^\infty \psi(t)e^{-\lambda t}t^{n-1}\,dt &=0\quad\text{ for all }\lambda>0\,,
\end{align}
after cancelling the positive constant $\frac{\lambda^n}{\Gamma(n)}$.
It can be shown, using the uniqueness (injectivity) of the Laplace transform, that the last display forces $\psi(t)$ to be zero almost everywhere. There might be an easier proof of completeness, but I could not find one.
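To illustrate (not prove) why the transform argument works, here is a small SymPy sketch: for the simple non-zero choice $\psi(t)=a+bt$, the integral has an explicit closed form in $\lambda$, and it vanishes for every $\lambda>0$ only if $a=b=0$:

```python
# Illustration of the Laplace-transform argument with SymPy: for
# psi(t) = a + b*t, compute  int_0^inf psi(t) t^(n-1) e^(-lambda t) dt.
# The result is a rational function of lambda that can vanish for
# every lambda > 0 only when a = b = 0.
import sympy as sp

t, lam = sp.symbols("t lambda", positive=True)
a, b = sp.symbols("a b", real=True)
n = 4                     # any fixed sample size works here

psi = a + b * t
expr = sp.integrate(psi * t**(n - 1) * sp.exp(-lam * t), (t, 0, sp.oo))
print(sp.simplify(expr))  # numerator is linear in a and b, so it is
                          # identically zero in lambda only if a = b = 0
```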
Best Answer
Lemma The minimal sufficient statistic $\left(X_{(1)},\sum_{i=2}^n \{X_{(i)}-X_{(1)}\}\right)$ is not complete.
Proof. The joint distribution of $$\left(X_{(1)},\,Y\right),\qquad Y=\sum_{i=2}^n \{X_{(i)}-X_{(1)}\},$$ is the product of an Exponential $\mathcal E(n/\theta^2)$ translated by $\theta$ and of a $\mathcal Ga(n-1,1/\theta^2)$ [the proof follows from Sukhatme's Theorem, 1937, recalled in Devroye's simulation bible (1986, p. 211)]. This means that $X_{(1)}$ can be represented as $$X_{(1)}=\frac{\theta^2}{n}\varepsilon+\theta\,,\qquad\varepsilon\sim\mathcal E(1)\,,$$ and that $Y$ is scaled by $\theta^2$, since $$Y=\sum_{i=2}^n \{X_{(i)}-X_{(1)}\}=\theta^2 \eta\,,\qquad\eta\sim\mathcal Ga(n-1,1)\,,$$ so that $$\mathbb E_\theta\left[ Y^{1/2}\right]=\theta\, \frac{\Gamma(n-1/2)}{\Gamma(n-1)}\,.$$

Therefore $$\mathbb E_\theta\left[X_{(1)}-\frac{\Gamma(n-1)}{\Gamma(n-1/2)}Y^{1/2}\right]=\frac{\theta^2}{n}$$ eliminates the location part of $X_{(1)}$ and suggests dividing by $Y$ to remove the scale part: since $$\mathbb E_\theta\left[ Y^{-1/2}\right]=\theta^{-1}\, \frac{\Gamma(n-3/2)}{\Gamma(n-1)}\qquad\text{and}\qquad \mathbb E_\theta\left[ Y^{-1}\right]=\theta^{-2}\, \frac{\Gamma(n-2)}{\Gamma(n-1)}\,,$$ we have, for an arbitrary $\gamma$ and using the independence of $X_{(1)}$ and $Y$, $$\mathbb E_\theta\left[\frac{X_{(1)}-\gamma Y^{1/2}}{Y}\right]=\frac{\Gamma(n-2)}{n\Gamma(n-1)}+\frac{\theta^{-1}\Gamma(n-2)}{\Gamma(n-1)}- \frac{\gamma\, \theta^{-1}\Gamma(n-3/2)}{\Gamma(n-1)}\,.$$ Setting $$\gamma=\frac{\Gamma(n-2)}{\Gamma(n-3/2)}$$ leads to $$\mathbb E_\theta\left[\frac{X_{(1)}-\gamma Y^{1/2}}{Y}\right]=\frac{\Gamma(n-2)}{n\Gamma(n-1)}\,,$$ which is constant in $\theta$. Hence $$\frac{X_{(1)}-\gamma Y^{1/2}}{Y}-\frac{\Gamma(n-2)}{n\Gamma(n-1)}$$ is a non-degenerate function of the minimal sufficient statistic whose expectation is zero for every $\theta$, which shows the statistic is not complete and concludes the proof.
As pointed out by Sextus Empiricus, this is not the only transform of the sufficient statistic with constant expectation. His proposal $$\mathbb E_\theta\left[ X_{(1)} - \frac{1}{n(n-1)}Y- \frac{\Gamma(n-1)}{\Gamma(n-1/2)}Y^{1/2}\right] = 0$$ is an alternative.
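Both identities are easy to check by simulation. The sketch below assumes, as in the lemma, that $X_i\stackrel{\text{i.i.d.}}{\sim}\theta+\theta^2\,\mathcal E(1)$; the first statistic should average to $\Gamma(n-2)/\{n\,\Gamma(n-1)\}$ whatever the value of $\theta$, and the second to $0$:

```python
# Monte Carlo check (a sketch) that the two transforms of (X_(1), Y)
# have an expectation that does not depend on theta.
import numpy as np
from scipy.special import gamma as G

rng = np.random.default_rng(4)
n, reps = 5, 200_000
gam = G(n - 2) / G(n - 1.5)               # the gamma chosen in the lemma

for theta in (0.5, 1.0, 2.0):             # several arbitrary values of theta
    x = theta + theta**2 * rng.exponential(size=(reps, n))
    x1 = x.min(axis=1)                    # X_(1)
    y = x.sum(axis=1) - n * x1            # Y = sum_{i>=2} (X_(i) - X_(1))

    s1 = (x1 - gam * np.sqrt(y)) / y
    s2 = x1 - y / (n * (n - 1)) - (G(n - 1) / G(n - 0.5)) * np.sqrt(y)
    print(theta, s1.mean(), G(n - 2) / (n * G(n - 1)), s2.mean())
```

The printed means should stay close to the same constant (and to zero, respectively) across the three values of $\theta$, up to Monte Carlo error.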