1.
The pmf of the number of trials $x_i$ needed to obtain $k$ successes, each with probability $\pi$, is $p(x_i) = {x_i-1 \choose k-1}\pi^k(1-\pi)^{x_i-k}$.
$$L(\pi;x_1,\dots,x_n) = \prod_{i=1}^{n}{x_i-1 \choose k-1}\pi^k(1-\pi)^{x_i-k}$$
$$\ell(\pi;x_1,\dots,x_n) = \sum_{i=1}^{n}\left[\log{x_i-1 \choose k-1}+k\log(\pi)+(x_i-k)\log(1-\pi)\right]$$
$$\frac{d\ell}{d\pi} = \sum_{i=1}^{n}\left[\frac{k}{\pi}-\frac{x_i-k}{1-\pi}\right]$$
Setting this to zero,
$$\frac{nk}{\pi}=\frac{\sum_{i=1}^n x_i-nk}{1-\pi}$$
$\therefore$ $\hat\pi=\dfrac{nk}{\sum_{i=1}^n x_i}$
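As a quick numerical sanity check of this closed form (a minimal sketch with illustrative parameter values; note that numpy's negative binomial sampler counts failures before the $k$-th success, so $k$ is added to recover the total trial count $x_i$):

```python
import numpy as np

rng = np.random.default_rng(0)
k, pi, n = 5, 0.3, 10_000

# numpy returns failures before the k-th success; add k for total trials x_i
x = rng.negative_binomial(k, pi, size=n) + k

pi_hat = n * k / x.sum()  # closed-form MLE derived above
print(pi_hat)             # should be close to pi = 0.3
```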
2.
For the second part, use the asymptotic normality of the MLE: $\sqrt{n}(\hat\theta-\theta) \overset{D}{\rightarrow}N(0,\frac{1}{I(\theta)})$, where $I(\theta)$ is the Fisher information. Therefore, the standard deviation of $\hat\theta$ is approximately $[nI(\theta)]^{-1/2}$; this is usually called the standard error, since it comes from the asymptotic (CLT-type) approximation.
So we need to calculate the Fisher information for the negative binomial distribution.
$\frac{\partial^2 \log(P(x;\pi))}{\partial\pi^2}=-\frac{k}{\pi^2}-\frac{x-k}{(1-\pi)^2}$
$I(\pi)=-E\left(-\frac{k}{\pi^2}-\frac{X-k}{(1-\pi)^2}\right)=\frac{k}{\pi^2}+\frac{k(1-\pi)}{(1-\pi)^2\pi}=\frac{k}{\pi^2(1-\pi)}$
Note: $E(X) =\frac{k}{\pi}$ for the negative binomial pmf, which gives $E(X)-k=\frac{k(1-\pi)}{\pi}$.
Therefore, the standard error for $\hat \pi$ is $\left[n\cdot\frac{k}{\pi^2(1-\pi)}\right]^{-1/2}$.
Simplifying, we get $se(\hat\pi)=\sqrt{\dfrac{\pi^2(1-\pi)}{nk}}$.
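A small simulation (again a sketch with hypothetical parameter values, using the same trial-count convention as above) can check that the spread of $\hat\pi$ across replications matches this standard error:

```python
import numpy as np

rng = np.random.default_rng(1)
k, pi, n, reps = 5, 0.3, 200, 5_000

x = rng.negative_binomial(k, pi, size=(reps, n)) + k  # total trials per sample
pi_hats = n * k / x.sum(axis=1)                       # MLE in each replication

print(pi_hats.std())                        # empirical sd of the MLE
print(np.sqrt(pi**2 * (1 - pi) / (n * k)))  # asymptotic standard error
```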
3.
The geometric distribution is the special case of the negative binomial distribution with $k = 1$: note that $\pi(1-\pi)^{x-1}$ is the geometric pmf.
Therefore, a negative binomial random variable can be written as a sum of $k$ independent, identically distributed geometric random variables.
So by the CLT, the negative binomial distribution will be approximately normal when $k$ is large enough, as the sketch below illustrates.
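A quick way to see this empirically (a sketch with illustrative values of $k$ and $\pi$): summing $k$ i.i.d. geometric draws reproduces the negative binomial's mean and variance, and for large $k$ a histogram of the sums looks approximately normal.

```python
import numpy as np

rng = np.random.default_rng(2)
k, pi, reps = 50, 0.3, 100_000

# numpy's geometric counts trials up to and including the first success,
# matching the pmf pi*(1-pi)^(x-1) above
x = rng.geometric(pi, size=(reps, k)).sum(axis=1)

print(x.mean(), k / pi)               # mean matches k/pi
print(x.var(), k * (1 - pi) / pi**2)  # variance matches k(1-pi)/pi^2
```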
It looks like you have made a mistake in your equations for the MOM estimators. It is best to first try solving these problems mathematically, rather than using computer code. For the MOM estimators, we can begin by observing that we have the following simple equations:
$$\frac{\mathbb{E}(X)}{\mathbb{V}(X)} = 1-p
\quad \quad \quad
\frac{\mathbb{E}(X)^2}{\mathbb{V}(X)} = rp.$$
Solving these equations for $p$ and $r$ gives:
$$p = 1-\frac{\mathbb{E}(X)}{\mathbb{V}(X)}
\quad \quad \quad
r = \frac{\mathbb{E}(X)^2}{\mathbb{V}(X)-\mathbb{E}(X)}.$$
Substituting the sample mean $\bar{X}$ and the sample variance $s_X^2$ gives the MOM estimators:
$$\hat{p} = 1-\frac{\bar{X}}{s_X^2}
\quad \quad \quad
\hat{r} = \frac{\bar{X}^2}{s_X^2-\bar{X}}.$$
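In code, these estimators are direct plug-ins of the sample moments (a minimal sketch; note that $\hat{p}$ and $\hat{r}$ are only well-defined when the sample is overdispersed, i.e. $s_X^2 > \bar{X}$):

```python
import numpy as np

def mom_negative_binomial(x):
    """Method-of-moments estimates (p_hat, r_hat), using the
    parameterization E(X) = rp/(1-p), V(X) = rp/(1-p)^2."""
    xbar = np.mean(x)
    s2 = np.var(x, ddof=1)  # sample variance
    if s2 <= xbar:
        raise ValueError("sample not overdispersed; MOM estimates undefined")
    p_hat = 1 - xbar / s2
    r_hat = xbar**2 / (s2 - xbar)
    return p_hat, r_hat
```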
Best Answer
Actually, I'd like to disagree slightly with a previous answer. Yes, it's true that the estimates themselves that come from an MLE are indeed invariant to transformations of the parameters, so it's correct that you can just take $\hat{\phi} = 1 + \hat{m}/\hat{r}$.
However, the question also asked about the standard error estimate (or, in more complete terms, the covariance matrix) of the estimates $\hat{\phi}$ and $\hat{m}$. Those are not invariant, especially under a non-linear transformation such as this one. To compute a first-order approximation to the standard error (or, more properly, the covariance matrix) under such a change of variables, you need to compute a Jacobian. Section 2 of this reference describes how to use the Jacobian $J$ to transform the uncertainties; essentially, $C^{'} = JCJ^{T}$. Alternatively, equations 3, 9, and 17 of this reference also sketch out the basic concept well.

For a non-linear transformation of variables such as the one in this question, the first-order Jacobian approximation can sometimes give poor results. If an accurate error estimate is important to you, you may want to explore an unscented transform as well, an approximately equivalent numerical framework for performing the same types of calculations.
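As a concrete sketch of that first-order step (all numbers below are hypothetical placeholders, not values from the question): for $\phi = 1 + m/r$, the Jacobian of the map $(m, r) \mapsto (m, \phi)$ has rows $(1, 0)$ and $(1/r, -m/r^2)$, and the transformed covariance is $JCJ^{T}$.

```python
import numpy as np

# hypothetical MLE output: estimates and covariance matrix for (m, r)
m_hat, r_hat = 2.5, 1.8
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])

phi_hat = 1 + m_hat / r_hat

# Jacobian of (m, r) -> (m, phi), evaluated at the estimates
J = np.array([[1.0,          0.0],
              [1.0 / r_hat, -m_hat / r_hat**2]])

C_new = J @ C @ J.T            # first-order covariance of (m_hat, phi_hat)
se_phi = np.sqrt(C_new[1, 1])  # approximate standard error of phi_hat
print(phi_hat, se_phi)
```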