This might work:
Define $$I(\theta)=\begin{cases}1, & \text{if }\theta=\theta_1 \\ 0, & \text{if }\theta=\theta_2\end{cases}$$
Then the pdf of $X\sim F_{\theta}$ can be written as
\begin{align}
G_{\theta}(x)&=[f_{\theta_1}(x)]^{I(\theta)}\,[f_{\theta_2}(x)]^{1-I(\theta)} \\
&=\underbrace{\left[\frac{f_{\theta_1}(x)}{f_{\theta_2}(x)}\right]^{I(\theta)}}_{g(\theta,\,T(x))}\,\underbrace{f_{\theta_2}(x)}_{h(x)}\,,\qquad \theta\in\{\theta_1,\theta_2\}
\end{align}
By the Factorisation Theorem, a sufficient statistic for $\theta$ is $$T(X)=\frac{f_{\theta_1}(X)}{f_{\theta_2}(X)}$$
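As a quick numerical illustration (not part of the original argument), here is a sketch in Python for a hypothetical two-point family with $f_{\theta_1}=N(0,1)$ and $f_{\theta_2}=N(1,1)$; the choice of family and all names below are assumptions made only for the demo:

```python
import numpy as np
from scipy.stats import norm

def T(x):
    # Likelihood ratio f_theta1(x) / f_theta2(x): the sufficient statistic
    # for the hypothetical two-point family N(0,1) vs N(1,1)
    return norm.pdf(x, loc=0, scale=1) / norm.pdf(x, loc=1, scale=1)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
ratio = T(x)
# For this particular pair of normals, the ratio simplifies to exp(1/2 - x)
```

For this Gaussian pair the likelihood ratio is a one-to-one function of $x$ itself, which is why a single observation is its own sufficient statistic here.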
First observe that $\frac{X}{\sqrt{\theta}}\sim N(0;1)$ and thus
$$ \bbox[5px,border:2px solid black]
{
\frac{X^2}{\theta}\sim \chi_{(1)}^2=Gamma\Big(\frac{1}{2};\frac{1}{2}\Big)
\qquad (1)
}
$$
Now it is evident that
$$ \bbox[5px,border:2px solid black]
{
T=\frac{1}{\theta}\sum_{i=1}^{n}X_i^2\sim Gamma\Big(\frac{n}{2};\frac{1}{2}\Big)
\qquad (2)
}
$$
Finally, setting $Y=\sum_{i=1}^{n}X_i^2=\theta T$, we conclude that
$$ \bbox[5px,border:2px solid black]
{
Y\sim Gamma\Big(\frac{n}{2};\frac{1}{2\theta}\Big)
\qquad (3)
}
$$
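The boxed claims can be checked by simulation; below is a minimal sketch, with $n$ and $\theta$ chosen arbitrarily for the check (note that `scipy` parametrizes the Gamma by shape and scale, so the rate $\frac{1}{2\theta}$ in (3) becomes scale $2\theta$):

```python
import numpy as np
from scipy import stats

theta, n = 2.0, 5
rng = np.random.default_rng(1)

# 100k replications of n i.i.d. draws X_i ~ N(0, theta)
x = rng.normal(0.0, np.sqrt(theta), size=(100_000, n))
Y = (x**2).sum(axis=1)  # Y = sum of X_i^2 = theta * T

# (3) claims Y ~ Gamma(n/2; 1/(2*theta)), i.e. shape n/2, scale 2*theta
D, p = stats.kstest(Y, stats.gamma(a=n / 2, scale=2 * theta).cdf)
```

A small Kolmogorov-Smirnov statistic `D` indicates the empirical distribution of `Y` is consistent with the claimed Gamma law.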
To prove (1) and (3), use the fundamental transformation theorem (change of variable); to prove (2), use the properties of MGFs.
Some hints for the proofs:
For (1), use the change of variable $Z=\frac{X^2}{\theta}$; then
$$F_Z(z)=\mathbb{P}[Z\leq z]=\mathbb{P}[X^2\leq z\theta]=\mathbb{P}[-\sqrt{z\theta}\leq X \leq \sqrt{z\theta}]=F_X(\sqrt{z\theta})-F_X(-\sqrt{z\theta})$$
differentiate it to get your first PDF:
$$f_Z(z)=\frac{1}{\sqrt{2\pi}}z^{-\frac{1}{2}}e^{-\frac{z}{2}},\qquad z>0$$
This can be rewritten in the following way:
$$f_Z(z)=\frac{\Big(\frac{1}{2}\Big)^{\frac{1}{2}}}{\Gamma\Big(\frac{1}{2}\Big)}z^{\frac{1}{2}-1}e^{-\frac{z}{2}}$$
...and the first step is done! $f_Z(z)$ is evidently a $Gamma\Big(\frac{1}{2};\frac{1}{2}\Big)$ density.
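The identity between the two displayed forms of $f_Z$ can also be checked numerically (again, `scipy` uses shape and scale, so rate $\frac12$ becomes scale $2$); the grid of $z$ values is arbitrary:

```python
import numpy as np
from scipy import stats

z = np.linspace(0.1, 5.0, 50)

# f_Z as derived above: (2*pi)^(-1/2) * z^(-1/2) * exp(-z/2)
f_manual = z**-0.5 * np.exp(-z / 2) / np.sqrt(2 * np.pi)

# It should coincide with both chi^2_(1) and Gamma(1/2; 1/2) (scale 2 in scipy)
f_chi2 = stats.chi2(df=1).pdf(z)
f_gamma = stats.gamma(a=0.5, scale=2).pdf(z)
```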
For step (2), simply multiply the $n$ identical MGFs.
For step (3), use the same procedure as in (1): a change of variable.
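The MGF argument for step (2) can be sketched directly: the MGF of $Gamma(a;b)$ is $(1-t/b)^{-a}$ for $t<b$, so raising the $\chi_{(1)}^2$ MGF to the $n$-th power yields the $Gamma\big(\frac{n}{2};\frac{1}{2}\big)$ MGF. The values of $n$ and the grid of $t$ below are arbitrary:

```python
import numpy as np

n = 7
t = np.linspace(-1.0, 0.4, 9)  # must stay below the rate b = 1/2

mgf_chi2_1 = (1 - 2 * t) ** -0.5      # MGF of Gamma(1/2; 1/2) = chi2(1)
mgf_sum = mgf_chi2_1 ** n             # product of n identical MGFs
mgf_target = (1 - 2 * t) ** (-n / 2)  # MGF of Gamma(n/2; 1/2)
```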
Best Answer
Your first issue is that you need to perform the factorization on the joint density of the sample. What you did instead was look at the density of a single observation.
So, the joint density of the sample $\boldsymbol x = (x_1, \ldots, x_n)$, comprising independent and identically distributed observations, is simply the product of the densities of the individual observations: $$f(\boldsymbol x \mid \theta) = \prod_{i=1}^n f(x_i \mid \theta) = \prod_{i=1}^n (\theta+1)\theta x_i (1 - x_i)^{\theta - 1} \mathbb 1 (0 < x_i < 1).$$ Here I have used a slightly different notation for the indicator function, but its meaning should be clear. Now, our goal is to write this in the form $$f(\boldsymbol x \mid \theta) = h(\boldsymbol x) g(T(\boldsymbol x) \mid \theta),$$ where $T : \mathbb R^n \to \mathbb R^m$ is some function of the sample. In particular, we require that $g$ depend on $\boldsymbol x$ only through $T$; that is to say, if we were given only $T(\boldsymbol x)$ instead of $\boldsymbol x$ itself, we could still compute $g$.
So, with this in mind, we write $$\begin{align*} f(\boldsymbol x \mid \theta) &= (\theta+1)^n \theta^n \prod_{i=1}^n x_i \biggl(\prod_{i=1}^n (1 - x_i)\biggr)^{\theta-1} \prod_{i=1}^n \mathbb 1 (0 < x_i < 1) \\ &= \underbrace{\biggl( \prod_{i=1}^n x_i \biggr) \mathbb 1 (0 < x_{(1)} \le x_{(n)} < 1)}_{h(\boldsymbol x)} \underbrace{(\theta+1)^n \theta^n \biggl(\underbrace{\prod_{i=1}^n (1 - x_i)}_{T(\boldsymbol x)}\biggr)^{\theta-1}}_{g(T(\boldsymbol x) \mid \theta)}. \end{align*}$$ A few important observations: first, $h$ contains the indicator function as well as the product $\prod_{i=1}^n x_i$. You cannot ignore the indicator, because the joint density is supported only on $(0,1)^n$.
Second, the choice of $T$ is not unique: you could also have chosen, for example, $$T(\boldsymbol x) = \frac{1}{n} \log \prod_{i=1}^n (1 - x_i) = \frac{1}{n} \sum_{i=1}^n \log (1 - x_i),$$ which is the mean of a suitably log-transformed sample. This is because the logarithm is one-to-one on the support of $X$.
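That the two choices of $T$ carry the same information is easy to check numerically; the sample below is arbitrary, used only to illustrate the one-to-one relationship:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, size=10)  # a sample supported on (0, 1)

T_prod = np.prod(1 - x)             # T(x) = prod of (1 - x_i)
T_logmean = np.mean(np.log(1 - x))  # alternative T: mean of log(1 - x_i)

# The two statistics are related by the one-to-one map t -> log(t) / n,
# so each can be recovered from the other
```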
Third, $h$ and $g$ could also have been chosen differently; these are not necessarily unique. What matters is that we make the choice such that $g$ can be computed when all we know is $\theta$ and $T(\boldsymbol x)$.