The idea behind the Method of Moments estimator is the following.
Suppose we have $\underline{X}$ $=$ $(X_{1},...,X_{n})$ iid observations, distributed according to $f(\cdot \mid \boldsymbol{\theta})$, with $\boldsymbol{\theta}$ $\in$ $\Theta$ $\subseteq$ $\mathbb{R}^{d}$.
Define:
$$\mu_{k} = E(X_{i}^{k}) = \mu_{k}(\theta_{1},...,\theta_{d})$$
the $k$-th moment of $X_{i}$ and
$$m_{k} = \frac{1}{n}\sum_{i = 1}^{n}X_{i}^{k}$$
the $k$-th sample moment. Then we estimate the parameter vector $\boldsymbol{\theta}$ = $(\theta_{1},...,\theta_{d})$ by solving the following system of equations:
$$\begin{cases} m_{1} = \mu_{1}(\theta_{1},\dots,\theta_{d})\\
\quad\vdots\\
m_{d} = \mu_{d}(\theta_{1},\dots,\theta_{d})\end{cases}$$
leading to $(\hat{\theta_{1}},...,\hat{\theta_{d}})$ $=$ $\boldsymbol{\hat{\theta}}$, our method of moments estimator.
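To make the recipe concrete, here is a minimal numerical sketch (my own illustration, not part of the original derivation) for a Gamma distribution with shape $a$ and scale $b$, whose first two moments are $\mu_{1} = ab$ and $\mu_{2} = a(a+1)b^{2}$; the two moment equations are handed to `scipy.optimize.fsolve`:

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=3.0, size=10_000)  # true (a, b) = (2, 3)

m1, m2 = x.mean(), (x**2).mean()  # first two sample moments

# Moment equations: mu_1(a, b) = a*b and mu_2(a, b) = a*(a+1)*b^2
def equations(params):
    a, b = params
    return [a * b - m1, a * (a + 1) * b**2 - m2]

a_hat, b_hat = fsolve(equations, x0=[1.0, 1.0])
print(a_hat, b_hat)  # should land close to (2, 3)
```

For distributions with closed-form moment inversions (like the beta below) the numerical solver is unnecessary, but the same pattern applies.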
Now, consider a generic beta distribution:
$$f(x_{i} \mid \theta, \alpha) = \frac{\Gamma(\theta + \alpha)}{\Gamma(\theta)\Gamma(\alpha)}x_{i}^{\theta - 1}(1 - x_{i})^{\alpha - 1}I(0 < x_{i} < 1)$$
In this case, $\boldsymbol{\theta}$ $=$ $(\theta, \alpha)$, hence we need to consider the first two moments.
The first moment of $X_{i}$ is:
$$E(X_{i}) = \frac{\theta}{\theta + \alpha}$$
while the second moment can be proved to be:
$$E(X_{i}^{2}) = \frac{\theta(\theta + 1)}{(\theta + \alpha)(\theta + \alpha + 1)}$$
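For completeness (a standard beta-distribution fact, filled in here since the original only says "can be proved"), both moments follow from

$$E(X_{i}^{k}) = \frac{B(\theta + k, \alpha)}{B(\theta, \alpha)} = \prod_{r=0}^{k-1}\frac{\theta + r}{\theta + \alpha + r},$$

which gives the expressions above for $k = 1, 2$.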
Now, for the first sample moment, we have:
$$m_{1} = \frac{1}{n}\sum_{i = 1}^{n}X_{i} = \overline{X_{n}}$$
while for the second sample moment we exploit the sample variance $s^{2} = \frac{1}{n}\sum_{i=1}^{n}(X_{i} - \overline{X_{n}})^{2}$ (the divide-by-$n$ version, for which the identity below holds exactly):
$$s^{2} = m_{2} - m_{1}^{2} \Rightarrow m_{2} = s^{2} + m_{1}^{2}$$
and then we specialize the system of equations described before to this case, to be solved for both $\theta$ and $\alpha$:
$$\begin{cases} m_{1} = \overline{X_{n}} = E(X_{i}) = \frac{\theta}{\theta + \alpha}\\
m_{2} = s^{2} + m_{1}^{2} = E(X_{i}^{2}) = \frac{\theta(\theta + 1)}{(\theta + \alpha)(\theta + \alpha + 1)}\end{cases}$$
This system yields the two Method of Moments estimators $\hat{\theta}$ and $\hat{\alpha}$ in the general case, with both parameters of the beta unknown. In your case the problem is simpler: $\alpha$ $=$ $1$ is known, so we only have to estimate $\theta$ from the first sample moment, i.e. we solve just the first equation with $\alpha$ $=$ $1$:
$$\overline{X_{n}} = \frac{\theta}{\theta + 1} \Rightarrow \hat{\theta} = \frac{\overline{X_{n}}}{1 - \overline{X_{n}}}$$
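As a quick sanity check (my own sketch; the true $\theta$ and sample size are arbitrary choices), simulating from a Beta$(\theta, 1)$ and applying this estimator recovers the truth:

```python
import numpy as np

rng = np.random.default_rng(42)
theta_true = 2.5

x = rng.beta(a=theta_true, b=1.0, size=100_000)  # Beta(theta, alpha=1) sample
x_bar = x.mean()

theta_hat = x_bar / (1.0 - x_bar)  # method of moments estimator
print(theta_hat)  # should be close to 2.5
```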
Hope this clarifies things.
I attempted to compute the mean of the estimator here (taking into account the observation in the comments on the original question that the sign of the estimator was reversed). The model here is the shifted exponential $f(x \mid \theta) = e^{-(x - \theta)}$ for $x > \theta$, whose first moment is $\theta + 1$, so the method of moments estimator is $\hat{\theta} = \overline{X_{n}} - 1$. From the computation below you can see that the bias is zero, using the definition $\text{Bias} = E\left[\hat\theta\right] - \theta$.
I believe the rest of the answer can follow along these lines. I hope this helps.
$$E\left[\hat{\theta}\right]=E\left[\frac{1}{n}\sum_{i=1}^{n}X_{i}-1\right]$$
$$E\left[\hat{\theta}\right]=\frac{1}{n}\sum_{i=1}^{n}E\left[X_{i}\right]-1$$
$$E\left[\hat{\theta}\right]=\frac{1}{n}\sum_{i=1}^{n}\int_{\theta}^{\infty}x\,e^{-(x-\theta)}\,dx-1$$
$$E\left[\hat{\theta}\right]=\frac{1}{n}\sum_{i=1}^{n}\left[-(x+1)e^{\theta-x}\right]_{x=\theta}^{\infty}-1$$
$$E\left[\hat{\theta}\right]=\frac{1}{n}\sum_{i=1}^{n}\left(\theta+1\right)-1=\theta$$
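To double-check unbiasedness numerically, here is a small Monte Carlo sketch of my own (the sample size and true $\theta$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 4.0
n, reps = 50, 20_000

# Shifted exponential: X = theta + E with E ~ Exp(1)
samples = theta_true + rng.exponential(scale=1.0, size=(reps, n))
theta_hats = samples.mean(axis=1) - 1.0  # method of moments estimator

print(theta_hats.mean())  # close to 4.0, i.e. bias ~ 0
```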
Best Answer
First apply the law of total expectation, using $\mathbb{E}[X \mid N] = N\theta$ (as for a Binomial$(N, \theta)$ count), and get
$$\mathbb{E}\left[\frac{X}{N} \right]=\mathbb{E}\left[\mathbb{E}\left[\frac{X}{N}\,\middle|\, N \right] \right]=\mathbb{E}\left[\frac{1}{N}\,\mathbb{E}[X \mid N] \right]=\mathbb{E}\left[\frac{1}{N}\cdot N\theta \right]=\theta$$
then using the definition
$$\mathbb{V}[X]=\mathbb{E}[X^2]-\mathbb{E}^2[X]$$
you can find your variance
Since $\mathbb{E}[X^2 \mid N] = N\theta(1-\theta) + N^2\theta^2$ for the binomial count,
$$\mathbb{E}\left[\left(\frac{X}{N} \right)^2 \right]=\mathbb{E}\left[\mathbb{E}\left[\left(\frac{X}{N} \right)^2 \,\middle|\, N \right] \right]=\mathbb{E}\left[\frac{1}{N^2}\,\mathbb{E}\left[X^2 \mid N\right] \right]=$$
$$=\mathbb{E}\left[\frac{\theta(1-\theta)}{N}+\theta^2 \right]=\mathbb{E}\left[\frac{1}{N} \right]\theta(1-\theta)+\theta^2$$
which is
$$\mathbb{V}\left[\frac{X}{N} \right]=\mathbb{E}\left[\frac{1}{N} \right]\theta(1-\theta)+\theta^2-\theta^2=\mathbb{E}\left[\frac{1}{N} \right]\theta(1-\theta)$$
...as requested
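A quick Monte Carlo sanity check (my own sketch; the distribution chosen for $N$ is a hypothetical example, since the result holds for any positive integer-valued $N$):

```python
import numpy as np

rng = np.random.default_rng(7)
theta = 0.3
reps = 200_000

# Hypothetical choice: N uniform on {5, ..., 50}
N = rng.integers(5, 51, size=reps)
X = rng.binomial(N, theta)  # X | N ~ Binomial(N, theta)
ratio = X / N

print(ratio.mean())  # ~ theta = 0.3
# Both of these should agree: ~ E[1/N] * theta * (1 - theta)
print(ratio.var(), (1 / N).mean() * theta * (1 - theta))
```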