(a) is wrong. You found, via the factorization theorem, that $T=X^2$ is sufficient for $\sigma^2$. However, $X$ is not a one-to-one function of $T$ (the sign of $X$ cannot be recovered from $X^2$), so sufficiency of $X$ does not follow from the sufficiency of $T$.
(b)
$$T_1=|X|=\sqrt{X^2}$$
In this case, $T_1=\sqrt{T}$ is a one-to-one function of $T$ on its range $[0,\infty)$, so $T_1$ is also sufficient for $\sigma^2$.
(c) This is wrong too.
It is easy to find that $\hat{\theta}=X^2$ (observe that the MLE must be a function of the sufficient statistic, if one exists).
Now, using the invariance property,
$$\hat{\sigma}_{ML}=\sqrt{\hat{\theta}}=|X|$$
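As a quick numerical sanity check (a sketch, not part of the argument above; the observed value $x = 1.7$ and the grid bounds are arbitrary choices), one can grid-search the log-likelihood of a single observation from $N(0, \sigma^2)$ in the parameter $\theta = \sigma^2$ and confirm that the maximum sits at $\hat{\theta} = x^2$, hence $\hat{\sigma} = |x|$:

```python
import numpy as np

x = 1.7  # a single (arbitrary) observed value

# Log-likelihood of one draw from N(0, s) as a function of s = sigma^2:
#   l(s) = -0.5*log(2*pi*s) - x^2 / (2*s)
s_grid = np.linspace(0.01, 10.0, 100_000)
loglik = -0.5 * np.log(2 * np.pi * s_grid) - x**2 / (2 * s_grid)

s_hat = s_grid[np.argmax(loglik)]  # numerical maximizer, should be near x^2
print(s_hat, np.sqrt(s_hat))
```

The grid maximizer agrees with $x^2 = 2.89$ (so $\hat{\sigma} = |x| = 1.7$) up to the grid resolution.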
Your formula
$$f(x_1, \ldots, x_n \mid \theta) = \sum_{n = 1}^\infty p_n \theta^{\sum_{i = 1}^n x_i}(1-\theta)^{n - \sum_{i = 1}^n x_i}$$
is problematic in two ways.
First, you use $n$ in two different contexts: it denotes the sample size on the left-hand side ($x_1, x_2, \ldots, x_n$), but it also serves as the index of summation on the right-hand side, where $n \in \{1, 2, 3, \ldots\}.$ This is clearly wrong.
Second, this expression reveals a misconception on your part regarding the nature of the parametric model and what the sample looks like. As the question is posed, there is but a single binomial random variable $X$, whereas your joint PMF implies some arbitrary number of independent binomial random variables, possibly with different sample sizes.
To specify the model concretely, it is this: $$N \sim \operatorname{Categorical}(p_1, p_2, \ldots ), \\
\Pr[N = n] = p_n, \quad \sum_{n=1}^\infty p_n = 1. \\ B_i \sim \operatorname{Bernoulli}(\theta), \\ \Pr[B_i = 1] = \theta, \quad 0 < \theta < 1. \\ X = \sum_{i=1}^N B_i, \\ X \mid N \sim \operatorname{Binomial}(N,\theta), \\ \Pr[X = x \mid N = n] = \binom{n}{x} \theta^x (1 - \theta)^{n-x}, \quad x \in \{0, 1, \ldots, n\}.$$
The sample in this case is not a set of IID binomial variables, but rather the set $\{B_i\}_{i=1}^N$ of Bernoulli variables, from which the single realization $X$ is obtained, as the question clearly states.
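The hierarchy above can be simulated directly (an illustrative sketch; the distribution of $N$ is truncated to three support points and the values of $p_n$ and $\theta$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.2, 0.5, 0.3])  # Pr[N = n] for n = 1, 2, 3 (arbitrary, sums to 1)
theta = 0.4                    # arbitrary Bernoulli success probability

def draw_x():
    n = rng.choice([1, 2, 3], p=p)    # N ~ Categorical(p_1, p_2, p_3)
    b = rng.random(n) < theta         # B_1, ..., B_n iid Bernoulli(theta)
    return b.sum(), n                 # X = sum of the B_i, together with N

draws = [draw_x() for _ in range(100_000)]
# Sanity check: E[X] = theta * E[N] by the tower property
mean_x = np.mean([x for x, _ in draws])
print(mean_x, theta * (p * np.array([1, 2, 3])).sum())
```

The Monte Carlo mean of $X$ agrees with $\theta \, \mathrm{E}[N]$, as the tower property predicts.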
How does this affect the subsequent computation? What the "official" solution doesn't explicitly state, but is helpful to keep in mind, is that no information about $\theta$ is lost when we keep the sum $X$ rather than the full sample $(B_1, \ldots, B_N)$; that is to say, $X$ is sufficient for $\theta$ when $N$ is fixed. When we write the joint PMF for $(X, N)$ in factored form, we do not have access to the $B_i$, but we don't need it: given a realization of $N$, the sum $X$ has not discarded any information about $\theta$. This is what lets us treat the sample as the single ordered pair $(X, N)$, rather than $(B_1, \ldots, B_N)$: some data reduction has already taken place.
However, we can also write and factor
$$\begin{align}
f(\boldsymbol b , n \mid \theta)
&= \left(\prod_{i=1}^n \theta^{b_i} (1 - \theta)^{1 - b_i} \mathbb 1 (b_i \in \{0, 1\})\right) p_n \mathbb 1 (n \in \mathbb Z^+) \\
&= \theta^x (1-\theta)^{n-x} p_n \mathbb 1 (x \in \{0, \ldots, n\}) \mathbb 1 (n \in \mathbb Z^+) \\
\end{align}$$
where $$x = \sum_{i=1}^n \mathbb 1 (b_i = 1).$$ Since $p_n$ does not involve $\theta$, it belongs in the $h$ factor, hence the choice $$h(\boldsymbol b, n) = p_n \mathbb 1 (x \in \{0, \ldots, n\}) \mathbb 1 (n \in \mathbb Z^+), \\
T(\boldsymbol b, n) = (t_1(\boldsymbol b), t_2(\boldsymbol b)) = (x,n), \\
g((t_1, t_2) \mid \theta) = \theta^{t_1} (1 - \theta)^{t_2 - t_1}.
$$
Thus $(x,n)$ is sufficient for $\theta$ (remember, $T$ is a function of the $b_i$ and $n$, since $x$ is a function of these). The conclusion about sufficiency is the same whether we use the binomial or the Bernoulli model. Having established this, for minimal sufficiency it is best to work from $X$ directly, which is what the official solution does.
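The sufficiency claim can be checked concretely (a sketch; the Bernoulli string $b$ and the two values of $\theta$ are arbitrary): conditional on $(X, N) = (x, n)$, every string with $x$ successes has probability $1/\binom{n}{x}$, free of $\theta$:

```python
from math import comb

def cond_prob(b, theta):
    """Pr[B = b | X = x, N = n] for a Bernoulli string b with sum x."""
    n, x = len(b), sum(b)
    joint = theta**x * (1 - theta)**(n - x)               # Pr[B = b | N = n]
    marg = comb(n, x) * theta**x * (1 - theta)**(n - x)   # Pr[X = x | N = n]
    return joint / marg  # = 1 / C(n, x): theta cancels completely

b = (1, 0, 1, 0)  # an arbitrary string with x = 2, n = 4
print(cond_prob(b, 0.3), cond_prob(b, 0.8))  # both equal 1/C(4,2) = 1/6
```

The conditional distribution of the data given $(x, n)$ does not depend on $\theta$, which is exactly the definition of sufficiency.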
Best Answer
$|X|$ is indeed sufficient for $\sigma^2$. This is quite intuitive, since $\operatorname{Var}(X) = \operatorname{Var}(-X)$, so the sign of your observation shouldn't matter in this context. However, you can't conclude that $\frac{1}{\sqrt{2 \pi} \sigma}e^{-\frac{|x|^2}{2\sigma^2}} =: \phi(|x|)$ is the probability density function of $|X|$.
Since $|X|$ takes only non-negative values and $\phi(x)$ is an even function, it is quite intuitive that the pdf of $|X|$ is $2\phi(x)$ for $x \geq 0$. If you are not convinced, here is the calculation:
1) $(x < 0)$
\begin{equation} P(|X| \leq x) = 0, \end{equation} since $|X|$ takes only non-negative values.
2) $(x \geq 0)$ \begin{align} P(|X| \leq x) &= P(-x \leq X \leq x) \\ &= P(X \leq x) - P(X \leq -x) \\ &= P(X \leq x) - P(X \geq x) ~\text{(because of symmetry)} \\ &= P(X \leq x) - (1 - P(X \leq x)) \\ &= 2P(X \leq x) - 1. \end{align} If we differentiate with respect to $x$, we obtain the density $2\phi(x)$.
With this, we can conclude that $|X|$ has the density function \begin{equation} f(x) = \begin{cases} 0, & \text{if}\ x < 0,\\ 2\phi(x), & \text{if}\ x\geq0. \end{cases} \end{equation}
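If you want an empirical confirmation as well (a quick Monte Carlo sketch; $\sigma = 1.5$, the evaluation point $x_0 = 1$, and the sample size are arbitrary choices), the CDF $2\Phi(x/\sigma) - 1$ derived above matches the simulated distribution of $|X|$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
sigma = 1.5                              # arbitrary standard deviation
X = rng.normal(0.0, sigma, 200_000)      # X ~ N(0, sigma^2)

x0 = 1.0                                 # arbitrary evaluation point
emp = np.mean(np.abs(X) <= x0)           # Monte Carlo estimate of P(|X| <= x0)

Phi = 0.5 * (1 + erf(x0 / (sigma * sqrt(2))))  # standard normal CDF at x0/sigma
print(emp, 2 * Phi - 1)                  # the two values should agree closely
```

The empirical CDF of $|X|$ at $x_0$ agrees with $2\Phi(x_0/\sigma) - 1$ up to Monte Carlo error, consistent with the density $2\phi(x)$ on $[0, \infty)$.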