They are asking for an approximate distribution for $\hat \theta$ as $n\to \infty.$
A classic example of an asymptotic distribution is the central limit theorem, which gives an asymptotic distribution for the sample mean $\bar X.$ The central limit theorem says that provided $\mathrm{Var}(X)$ exists and is finite, the sample mean is approximately normally distributed with the correct mean and a variance that goes down as $1/n$: $$\bar X \sim N\left( E(X),\frac{\mathrm{Var}(X)}{n}\right).$$
The mathematically rigorous statement this is code for is that $$ \frac{\sqrt{n}(\bar X-E(X))}{\sqrt{\mathrm{Var}(X)}} \rightarrow_D N(0,1)$$ where $\rightarrow_D$ denotes convergence in distribution.
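As a sanity check, here is a minimal simulation of that statement (taking, purely for illustration, $X\sim\text{Exponential}(1)$, so that $E(X)=\mathrm{Var}(X)=1$): the standardized sample means should look like draws from $N(0,1)$.

```python
import numpy as np

# Simulate many sample means from an assumed example distribution:
# X ~ Exponential(1), so E(X) = 1 and Var(X) = 1.
rng = np.random.default_rng(0)
n, reps = 2_000, 5_000
x = rng.exponential(scale=1.0, size=(reps, n))

# Standardize each sample mean as in the CLT statement:
# sqrt(n) * (xbar - E(X)) / sqrt(Var(X))
z = np.sqrt(n) * (x.mean(axis=1) - 1.0) / 1.0
print(z.mean(), z.var())  # ≈ 0 and ≈ 1
```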
Now your method of moments estimator is a nice function of your sample mean. It turns out that a nice function of an asymptotically normal variable is itself asymptotically normal: $$ g(\bar X)\sim N\left(g(E(X)), \frac{\mathrm{Var}(X)\,\big(g'(E(X))\big)^2}{n}\right)$$ (for reasons I don't understand this is called the "delta method").
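You can watch the delta method work in a toy case (my assumptions, not part of the original problem): take $X\sim\text{Exponential}(1)$ and $g(x)=x^2$, so $E(X)=\mathrm{Var}(X)=1$, $g'(1)=2$, and the predicted limiting variance of $\sqrt{n}\,(g(\bar X)-g(E(X)))$ is $1\cdot 2^2=4$.

```python
import numpy as np

# Hypothetical illustration: X ~ Exponential(1) and g(x) = x^2, so the
# delta method predicts sqrt(n)*(g(xbar) - g(1)) ~ N(0, Var(X)*g'(1)^2) = N(0, 4).
rng = np.random.default_rng(1)
n, reps = 2_000, 3_000
xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (xbar**2 - 1.0)
print(z.mean(), z.var())  # ≈ 0 and ≈ 4
```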
So it's just a matter of computing $E(X)$ and $\mathrm{Var}(X)$ for your distribution. Then for part (a) you have $ \hat \theta = g(\bar X)$ where $$g(x)=\frac{1-2x}{x-1},$$ so you just need to compute the derivative and then plug everything into the formula.
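For reference, the quotient rule gives $g'(x)=\frac{1}{(x-1)^2}$; a quick central finite-difference check (evaluated at the arbitrary points $x_0=0.5$ and $x_0=2$, away from the pole at $x=1$) confirms it:

```python
# Check g'(x) = 1/(x-1)^2 (from the quotient rule) against a numerical derivative
def g(x):
    return (1 - 2 * x) / (x - 1)

def gprime_numeric(x, h=1e-6):
    # central finite difference, error O(h^2)
    return (g(x + h) - g(x - h)) / (2 * h)

for x0 in (0.5, 2.0):
    print(gprime_numeric(x0), 1 / (x0 - 1) ** 2)  # the two columns should agree
```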
For the MLE: it's expressed in terms of the sample mean of the $\log(X_i)$, but the $\log(X_i)$ also obey the central limit theorem (you just need to compute their mean and variance), so you can apply the delta method to that sample mean as well to get an asymptotic distribution for the MLE.
Under certain regularity conditions (like the ones mentioned here on page 1), maximum likelihood estimators have an asymptotic normal distribution. In particular, distributions which are members of the regular exponential family satisfy these conditions.
For $Y_i=\log X_i$, the joint density of $Y_1,\ldots,Y_n$ is
\begin{align}
f_{\theta}(y_1,\ldots,y_n)&=\frac{1}{(\sqrt{2\theta\pi})^n}\exp\left[-\frac{1}{2\theta}\sum_{i=1}^n (y_i-\theta)^2\right]
\\&=\frac{1}{(\sqrt{2\theta\pi})^n}\exp\left[-\frac{1}{2\theta}\sum_{i=1}^n y_i^2+\sum_{i=1}^n y_i-\frac{n\theta}{2}\right]\quad,\small (y_1,\ldots,y_n)\in\mathbb R^n,\,\theta>0
\end{align}
This shows that $f_{\theta}$ is a member of a regular one-parameter exponential family. So we can say that the MLE $\hat\theta$ of $\theta$ has an asymptotic normal distribution, given by
$$\sqrt n(\hat\theta-\theta)\stackrel{L}\longrightarrow N\left(0,\frac{1}{I_{Y_1}(\theta)}\right)\,,$$
where $I_{Y_1}(\theta)=-E_{\theta}\left[\frac{\partial^2}{\partial\theta^2}\ln f_{\theta}(Y_1)\right]$ is the information contained in $Y_1$.
A routine calculation gives $I_{Y_1}(\theta)=\frac{2\theta+1}{2\theta^2}$, so that the limiting distribution is
$$\sqrt n(\hat\theta-\theta)\stackrel{L}\longrightarrow N\left(0,\frac{2\theta^2}{2\theta+1}\right)$$
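The "routine calculation" can be double-checked numerically: differentiating $\ln f_{\theta}(y)=-\tfrac12\ln(2\pi\theta)-\frac{(y-\theta)^2}{2\theta}$ twice in $\theta$ gives $\frac{1}{2\theta^2}-\frac{y}{\theta^2}-\frac{y(y-\theta)}{\theta^3}$, so a Monte Carlo average of its negative under $Y\sim N(\theta,\theta)$ (here at the arbitrary value $\theta=2$) should match $\frac{2\theta+1}{2\theta^2}$:

```python
import numpy as np

theta = 2.0  # arbitrary test value
rng = np.random.default_rng(2)
y = rng.normal(loc=theta, scale=np.sqrt(theta), size=500_000)  # Y ~ N(theta, theta)

# Second derivative in theta of log f_theta(y) for this model
d2 = 1 / (2 * theta**2) - y / theta**2 - y * (y - theta) / theta**3

info_mc = -d2.mean()                             # Monte Carlo Fisher information
info_formula = (2 * theta + 1) / (2 * theta**2)  # (2*2+1)/(2*4) = 0.625
print(info_mc, info_formula)  # both ≈ 0.625
```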
Best Answer
The likelihood function is $$ L(\theta)=L(\theta\mid X_1,\dots,X_n)=\prod_{i=1}^n\frac{\theta}{(1+X_i)^{\theta+1}},\quad \theta,X_1,\dots,X_n>0 $$ So $$ \log L(\theta)=n\log\theta-(\theta+1)\sum_{i=1}^n\log(1+X_i) $$ giving the MLE $$ \hat\theta_n=\frac{n}{\sum_{i=1}^n\log(1+X_i)} $$ Now $Y_i=\log (1+X_i)$ follows an exponential distribution, so the central limit theorem gives that $\sqrt{n}(\bar{Y}-\mu_Y)$ is asymptotically $N(0,\sigma_Y^2)$ (I'll leave you to determine $\mu_Y$ and $\sigma_Y^2$). Hence, with $g(x)=1/x$, the delta method tells you that $\sqrt{n}(\hat\theta_n-\theta)$ is asymptotically $N(0,\sigma_Y^2\cdot[g'(g^{-1}(\theta))]^2)$.
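A short simulation makes the limit concrete (spoiler for the exercise: $Y\sim\text{Exp}(\theta)$ has $\mu_Y=1/\theta$ and $\sigma_Y^2=1/\theta^2$, and the delta-method variance then simplifies to $\theta^2$). Here $\theta=2$ is an arbitrary test value:

```python
import numpy as np

theta, n, reps = 2.0, 2_000, 3_000  # arbitrary test values
rng = np.random.default_rng(3)

# Y_i = log(1 + X_i) ~ Exponential(rate theta), so simulate the Y_i directly
ybar = rng.exponential(scale=1 / theta, size=(reps, n)).mean(axis=1)
theta_hat = 1 / ybar  # the MLE n / sum(Y_i), written via the sample mean

z = np.sqrt(n) * (theta_hat - theta)
print(z.mean(), z.var())  # ≈ 0 and ≈ theta**2 = 4
```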