The likelihood function is $$L(\theta|\mathbb x)=\begin{cases}\dfrac{1}{\theta ^n},\,\,\,\theta \le x_i \le 2\theta ,\forall i\\0,\,\,\,\,\,\,\,\,\text{otherwise}\end{cases}$$ $$=\begin{cases}\dfrac{1}{\theta ^n},\,\,\,\theta \le x_{(1)} \le x_{(n)} \le2\theta \\0,\,\,\,\,\,\,\,\,\text{otherwise}\end{cases}$$
For $\theta$ in the feasible range $\dfrac {x_{(n)}}{2}\le\theta\le x_{(1)}$, $L(\theta|\mathbb x)=\dfrac{1}{\theta ^n}$ is a decreasing function of $\theta$, so the likelihood is maximized at the left endpoint. Hence the MLE of $\theta$ is $\color{blue}{\hat\theta=\dfrac{X_{(n)}}{2}}$.
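As a sanity check, here is a small Python sketch (with made-up values of $\theta$ and the sample size) that evaluates this likelihood on a grid and confirms the peak sits at $x_{(n)}/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 3.0
x = rng.uniform(theta_true, 2 * theta_true, size=50)

def likelihood(theta, x):
    # L(theta | x) = theta^{-n} if theta <= min(x) and max(x) <= 2*theta, else 0
    if theta <= x.min() and x.max() <= 2 * theta:
        return theta ** (-len(x))
    return 0.0

# The likelihood is positive exactly on [max(x)/2, min(x)] and decreasing there,
# so the grid maximum should land at the left endpoint max(x)/2.
grid = np.linspace(x.max() / 2 - 0.5, x.min() + 0.5, 10_001)
L = np.array([likelihood(t, x) for t in grid])
print(grid[L.argmax()])  # numerical maximizer
print(x.max() / 2)       # closed form x_(n)/2
```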
Suppose that we are Martians and know nothing about the binomial distribution; we know only that we have a parameter $q\geq 1$ and a formula for the following probabilities
$$P(X=i)=\binom niq^{-i}\left(1-\frac1q\right)^{n-i}.\tag 1$$
($i=0,1,\cdots, n.$)
Now, assume that the outcome of our experiment is $X=0$.
Surprisingly, we are familiar with the maximum likelihood method. So, we apply it. We have to find the $q$ that maximizes
$$\left(1-\frac1q\right)^n.$$
Apparently, for any finite $q$ there is a better one, since $\left(1-\frac1q\right)^n$ is strictly increasing in $q$. That is, $q=\infty$ seems to be the maximum likelihood estimate.
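A quick Python sketch (with an illustrative $n$ and a handful of values of $q$) shows the monotonicity:

```python
# With X = 0 observed, the likelihood (1 - 1/q)^n only grows as q grows,
# approaching 1; no finite q maximizes it.
n = 10
for q in [1.5, 2.0, 10.0, 100.0, 1e6]:
    print(q, (1 - 1 / q) ** n)
```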
Now, we suddenly learn what the binomial distribution is. We immediately conclude that $p=\frac1q=0$ is the solution for the "true earthly parameter." Then away we sail.
EDIT
Let's try to find the maximum likelihood estimate of $q\geq1$ in the case of $n$ experiments and $i$ successful outcomes, assuming that the distribution is given by $(1)$. We can forget about the multiplier $\binom ni$ because it does not depend on $q$: after dividing $(1)$ by $\binom ni$, take the derivative with respect to $q$, set it equal to zero, and solve the resulting equation for $q$.
Here is the equation
$$(n-i)q^{-i-2}\left(1-\frac1q\right)^{n-i-1}=iq^{-i-1}\left(1-\frac1q\right)^{n-i}.$$
We will have to exclude $q=1$ from now on; however, $q=1$ is certainly the solution for $n=i$. Dividing both sides by $q^{-i-1}\left(1-\frac1q\right)^{n-i}$, the resulting equation is
$$(n-i)q^{-1}\left(1-\frac1q\right)^{-1}=i.$$
From here, since $q^{-1}\left(1-\frac1q\right)^{-1}=\dfrac{1}{q-1}$, the equation becomes $(n-i)=i(q-1)$, and we get the expected result:
$$\hat q=\frac ni.$$
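As a quick check (with illustrative values of $n$ and $i$), a grid search over $(1)$ in Python should peak near $n/i$:

```python
import numpy as np
from scipy.special import comb

# P(X = i) = C(n, i) * q^{-i} * (1 - 1/q)^{n - i}, maximized over q > 1.
n, i = 20, 6

def likelihood(q):
    return comb(n, i) * q ** (-i) * (1 - 1 / q) ** (n - i)

grid = np.linspace(1.0001, 10, 100_000)
L = likelihood(grid)
print(grid[L.argmax()])  # ~ n / i = 3.333...
```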
NOTE
You can see here that the MLE does have the invariance property: if $\frac in$ is the MLE for $p$, then for $q=\frac1p$ the MLE is $\frac ni$. I did the proof above for you and me, because I don't trust theorems (the invariance property, this time) whose proof I've never digested.
Best Answer
In general, the method of MLE is to maximize the likelihood $L(\theta;x_i)=\prod_{i=1}^n f(x_i;\theta)$, where $f$ is the density or mass function. See here for instance. In the case of the negative binomial distribution (with $r$ known) we have
$$L(p;x_i) = \prod_{i=1}^{n}{x_i + r - 1 \choose x_i}p^{r}(1-p)^{x_i}$$
$$ \ell(p;x_i) = \sum_{i=1}^{n}\left[\log{x_i + r - 1 \choose x_i}+r\log(p)+x_i\log(1-p)\right]$$ $$\frac{d\ell(p;x_i)}{dp} = \sum_{i=1}^{n}\left[\dfrac{r}{p}-\frac{x_i}{1-p}\right]=\sum_{i=1}^{n} \dfrac{r}{p}-\sum_{i=1}^{n}\frac{x_i}{1-p}$$
Set this to zero and add $\sum_{i=1}^{n}\frac{x_i}{1-p}$ to both sides.
$$\sum_{i=1}^{n} \dfrac{r}{p}=\sum_{i=1}^{n}\frac{x_i}{1-p}$$
$$\frac{nr}{p}=\frac{\sum\limits_{i=1}^nx_i}{1-p}\Rightarrow nr(1-p)=p\sum_{i=1}^n x_i\Rightarrow \hat p=\frac{nr}{nr+\sum\limits_{i=1}^n x_i}=\frac{r}{\overline x+r}$$
Now we have to check whether the MLE is indeed a maximum. For this purpose we calculate the second derivative of $\ell(p;x_i)$.
$$\frac{d^2\ell(p;x_i)}{dp^2}=\underbrace{-\frac{rn}{p^2}}_{<0}\underbrace{-\frac{\sum\limits_{i=1}^n x_i}{(1-p)^2}}_{<0}<0\Rightarrow \hat p\textrm{ is a maximum}$$
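For good measure, here is a numerical sketch (with made-up values of $r$, $p$, and the sample size) comparing the closed form with a direct numerical maximization; scipy's `nbinom.pmf(x, r, p)` is exactly $\binom{x+r-1}{x}p^r(1-p)^x$, the parameterization used above:

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize_scalar

# Illustrative parameters and simulated sample.
r, p_true = 5, 0.4
x = nbinom.rvs(r, p_true, size=1000, random_state=42)

# Minimize the negative log-likelihood in p over (0, 1).
neg_ll = lambda p: -nbinom.logpmf(x, r, p).sum()
res = minimize_scalar(neg_ll, bounds=(1e-6, 1 - 1e-6), method="bounded")

print(res.x)               # numerical MLE
print(r / (x.mean() + r))  # closed form r / (xbar + r)
```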