So, suppose that we are Martians and know nothing about the binomial distribution; we know only that we have a parameter $q\geq 1$ and a formula describing the following probabilities
$$P(X=i)=\binom niq^{-i}\left(1-\frac1q\right)^{n-i}.\tag 1$$
($i=0,1,\ldots, n$.)
Now, assume that the outcome of our experiment is $X=0$.
Surprisingly, we are familiar with the maximum likelihood method. So, we apply it. We have to find the $q$ that maximizes
$$\left(1-\frac1q\right)^n.$$
Apparently, for any finite $q$ there is a better one, since this expression is strictly increasing in $q$. That is, $q=\infty$ seems to be the maximum likelihood estimate.
Now, we suddenly learn what the binomial distribution is. We immediately conclude that $p=1/q=0$ is the estimate of the "true earthly parameter," and away we sail, immediately.
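A quick numerical sketch (Python, with a hypothetical $n=10$) makes the "every finite $q$ has a better one" claim concrete: the likelihood creeps up toward $1$ but never attains it.

```python
# Likelihood of observing X = 0 as a function of q, for a hypothetical n = 10.
n = 10
for q in [2, 10, 100, 1000, 1_000_000]:
    print(q, (1 - 1/q)**n)
# The printed values increase monotonically toward 1, so the supremum
# is approached only as q -> infinity; no finite q attains it.
```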
EDIT
Let's try to find the maximum likelihood parameter $q\geq1$ in the case of $n$ experiments with $i$ successful outcomes, assuming that the distribution is given by $(1)$. We can forget about the constant multiplier $\binom ni$: divide $(1)$ by $\binom ni$, take the derivative with respect to $q$, set the derivative equal to zero, and solve the resulting equation for $q$.
Here is the equation
$$(n-i)q^{-i-2}\left(1-\frac1q\right)^{n-i-1}=iq^{-i-1}\left(1-\frac1q\right)^{n-i}.$$
We will have to exclude $q=1$ from now on; note, however, that $q=1$ is certainly the solution when $i=n$. Divide both sides by $q^{-i-1}\left(1-\frac1q\right)^{n-i}$. The resulting equation is
$$(n-i)q^{-1}\left(1-\frac1q\right)^{-1}=i.$$
Since $q^{-1}\left(1-\frac1q\right)^{-1}=\frac1{q-1}$, this says $n-i=i(q-1)$, and from here we get the expected result:
$$\hat q=\frac ni.$$
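If you prefer to double-check the calculus numerically, here is a minimal grid-search sketch (hypothetical values $n=20$, $i=5$) confirming that the likelihood peaks at $q=n/i$:

```python
import numpy as np

# Hypothetical data: n = 20 trials, i = 5 successes.
n, i = 20, 5
q = np.linspace(1.001, 20.0, 200_000)       # candidate parameters q > 1
likelihood = q**(-i) * (1 - 1/q)**(n - i)   # (1) without the constant binomial factor
print(q[np.argmax(likelihood)])             # ~4.0, i.e. n/i
```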
NOTE
You can see here that the MLE does have the invariance property: if $\frac in$ is the MLE for $p$, then for $q=\frac1p$ the MLE is $\frac ni$. I did the proof above for you and me, because I don't believe in theorems (the invariance property, this time) whose proof I've never digested.
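The invariance can also be checked numerically: maximizing the likelihood in the $p$-parametrization and in the $q$-parametrization separately should produce reciprocal estimates. A sketch with SciPy, reusing the hypothetical $n=20$, $i=5$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

n, i = 20, 5  # hypothetical data again

# Negative log-likelihoods in the two parametrizations (constants dropped).
nll_p = lambda p: -(i * np.log(p) + (n - i) * np.log(1 - p))
nll_q = lambda q: -(-i * np.log(q) + (n - i) * np.log(1 - 1/q))

p_hat = minimize_scalar(nll_p, bounds=(1e-9, 1 - 1e-9), method="bounded").x
q_hat = minimize_scalar(nll_q, bounds=(1 + 1e-9, 1e3), method="bounded").x
print(p_hat, q_hat, 1 / p_hat)  # p_hat ~ 0.25 and q_hat ~ 4 ~ 1/p_hat
```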
Calculate the likelihood of the i.i.d. sample $x_1,\dots,x_n$ under the density $p(x|\theta)=\theta^2x\,e^{-\theta x}u(x)$, where $u$ is the unit step:
$$
\mathcal{L} = \prod_{i=1}^np(x_i|\theta) = \theta^{2n}e^{-\theta\sum_{i=1}^nx_i}\prod_{i=1}^n x_i \underbrace{u(x_i)}_{=\,1~{\rm since}~ x_i>0}
$$
So that
$$
\ln\mathcal{L} = 2n\ln\theta -\theta \sum_{i=1}^nx_i + \sum_{i=1}^n\ln x_i =2n\ln\theta -n \theta \bar{x} + \sum_{i=1}^n\ln x_i
$$
where $\bar{x} = n^{-1}\sum_{i=1}^nx_i$. Taking the derivative of this last expression with respect to $\theta$ (the $\sum_{i=1}^n\ln x_i$ term does not depend on $\theta$ and drops out), you get
$$
\frac{{\rm d}\ln\mathcal{L}}{{\rm d}\theta} = -n\bar{x} + \frac{2n}{\theta}
$$
Setting this to zero, the ML estimate of $\theta$ is
$$
\hat{\theta} = \frac{2}{\bar{x}}
$$
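As a sanity check: $\theta^2xe^{-\theta x}$ on $x>0$ is a Gamma density with shape $2$ and rate $\theta$, so we can simulate from it and compare $2/\bar x$ against the true rate. A minimal sketch with a hypothetical $\theta=3$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 3.0  # hypothetical rate parameter

# theta^2 * x * exp(-theta * x) on x > 0 is Gamma(shape=2, rate=theta);
# NumPy parametrizes the Gamma distribution by scale = 1/rate.
x = rng.gamma(shape=2.0, scale=1/theta_true, size=100_000)

print(2 / x.mean())  # the closed-form MLE 2/x-bar; prints a value close to 3.0
```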
Best Answer
Here the CDF is the thing you are estimating. You can think of its values as an infinite number of parameters, constrained to comprise a right-continuous, nondecreasing function with values between zero and one, and so on.
Let's say we get $X_1=3$ and $X_2=4$. We need to find the CDF that maximizes the probability of this data. It's pretty clear that anything other than an atom at $3$ and an atom at $4$ is a waste of real estate. Let $p$ be the mass at $3$ and $1-p$ the mass at $4$. Then we want to maximize $p(1-p)$, so we get $p=1/2$. (What else could it have been, by symmetry?)
This generalizes to putting $1/n$ mass at each of the points $X_1,\ldots, X_n.$
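In code, the nonparametric MLE is just the empirical CDF. A minimal sketch with the hypothetical sample $\{3,4\}$ from above:

```python
import numpy as np

# The nonparametric MLE puts mass 1/n on each observed point, so the
# estimated CDF is the empirical CDF of the sample.
x = np.array([3.0, 4.0])

def ecdf(t, sample):
    """Fraction of observations <= each evaluation point in t."""
    return np.mean(sample[:, None] <= t, axis=0)

print(ecdf(np.array([2.5, 3.0, 3.5, 4.0]), x))  # [0.  0.5 0.5 1. ]
```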