The density for each observation is $\displaystyle f_{X_i}(x) = \begin{cases} 1/(2a) & \text{if } -a\le x\le a, \\ 0 & \text{if } x<-a \text{ or } x>a. \end{cases}$
You didn't say your observations were independent, but I will assume that was intended. The joint density is therefore
$$
f_{X_1,\ldots,X_n} (x_1,\ldots,x_n) = \begin{cases} 1/(2a)^n & \text{if for every } i\in\{1,\ldots,n\} \text{ we have } -a\le x_i \le a, \\ 0 & \text{otherwise}. \end{cases}
$$
The condition that for every $i\in\{1,\ldots,n\}$ we have $-a\le x_i\le a$ is the same as $\min\{x_1,\ldots,x_n\} \ge -a$ and $\max\{x_1,\ldots,x_n\}\le a.$ That condition on $\min$ is the same as $-\min\{x_1,\ldots,x_n\} \le a.$ So we need $a\ge\max$ and $a\ge -\min.$ I leave it as an exercise to show that
$$
\Big( a\ge \max\{x_1,\ldots,x_n\} \text{ and } a\ge -\min\{x_1,\ldots,x_n\} \Big) \text{ if and only if } a \ge \max\{|x_1|,\ldots,|x_n|\}.
$$
Therefore the likelihood function is
$$
L(a) = \begin{cases} 1/(2a)^n & \text{if } a \ge \max\{|x_1|,\ldots,|x_n|\}, \\ 0 & \text{otherwise.} \end{cases}
$$
Now notice that $L(a)$ increases as $a$ decreases, until $a$ gets down to $\max\{|x_1|,\ldots,|x_n|\},$ below which $L(a)=0.$ Therefore the MLE is $\hat a = \max\{|X_1|,\ldots,|X_n|\}.$
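As a quick numerical check (my own addition, not part of the derivation), here is a short Python sketch; the true value $a=2$ and the sample sizes are illustrative choices:

```python
# A minimal numerical check: simulate Uniform(-a, a) samples and watch
# max|X_i| approach the true a. The value a = 2.0 is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
a_true = 2.0

for n in (10, 100, 10_000):
    x = rng.uniform(-a_true, a_true, size=n)
    a_hat = np.max(np.abs(x))  # the MLE derived above
    print(f"n = {n:>6}: a_hat = {a_hat:.4f}")
# Since a_hat <= a_true always, the estimate approaches the truth from below.
```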
Here the CDF is the thing you are estimating. You can think of its values as an infinite number of parameters (in a constrained space that says they need to comprise a right-continuous, nondecreasing function with values between zero and one, and so on).
Let's say we get $X_1=3$ and $X_2=4.$ We need to find the CDF that maximizes the probability of this data. It's pretty clear that anything other than an atom at $3$ and an atom at $4$ is a waste of real estate. Let $p$ be the mass at $3$ and $1-p$ the mass at $4.$ Then we want to maximize $p(1-p),$ so we get $p=1/2.$ (What else could it have been?)
This generalizes to putting $1/n$ mass at each of the points $X_1,\ldots, X_n.$
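To make this concrete, here is a minimal Python sketch of the resulting estimator, the empirical CDF; the helper name `empirical_cdf` is mine:

```python
# A minimal sketch: the nonparametric MLE puts mass 1/n at each
# observation, i.e. it is the empirical CDF.
import numpy as np

def empirical_cdf(sample):
    """Return F_hat with F_hat(t) = (number of observations <= t) / n."""
    data = np.sort(np.asarray(sample, dtype=float))
    n = len(data)
    return lambda t: np.searchsorted(data, t, side="right") / n

F_hat = empirical_cdf([3, 4])
print(F_hat(2.9), F_hat(3.0), F_hat(3.5), F_hat(4.0))  # 0.0 0.5 0.5 1.0
```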
So, suppose that we are Martians and know nothing about the binomial distribution; we know only that we have a parameter $q\geq 1$ and a formula describing the following probabilities
$$P(X=i)=\binom niq^{-i}\left(1-\frac1q\right)^{n-i}.\tag 1$$ ($i=0,1,\ldots,n.$) Now, assume that the outcome of our experiment is $X=0$.
Surprisingly, we are familiar with the maximum likelihood method. So, we apply it. We have to find the $q$ that maximizes
$$\left(1-\frac1q\right)^n.$$
Evidently, for any finite $q$ there is a better one; that is, $q=\infty$ seems to be the maximum likelihood estimate.
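A quick numerical illustration (my own addition; $n=5$ is an arbitrary choice) shows the likelihood climbing toward $1$ with no finite maximizer:

```python
# A quick sanity check: for X = 0 the likelihood (1 - 1/q)^n is
# strictly increasing in q, so no finite q maximizes it.
n = 5
for q in (1.5, 2.0, 10.0, 100.0, 1e6):
    print(f"q = {q:>9}: L(q) = {(1 - 1/q) ** n:.6f}")
# L(q) climbs toward 1 as q -> infinity.
```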
Now, we suddenly learn what the binomial distribution is. We immediately conclude that $p=0$ is the solution for the "true earthly parameter." Away we sail.
EDIT
Let's try to find the maximum likelihood parameter $q\geq1$ in the case of $n$ experiments and $i$ successful outcomes, assuming that the distribution is given by $(1)$. We can forget about the multiplier $\binom ni$: divide $(1)$ by it, take the derivative with respect to $q$, set the derivative equal to zero, and solve the resulting equation for $q$.
Here is the equation
$$(n-i)q^{-i-2}\left(1-\frac1q\right)^{n-i-1}=iq^{-i-1}\left(1-\frac1q\right)^{n-i}.$$
We will have to exclude $q=1$ from now on; however, $q=1$ is certainly the solution when $n=i$. Divide both sides by $q^{-i-1}\left(1-\frac1q\right)^{n-i}$. The resulting equation is
$$(n-i)q^{-1}\left(1-\frac1q\right)^{-1}=i.$$
Since $q^{-1}\left(1-\frac1q\right)^{-1}=\frac1{q-1}$, this says $\frac{n-i}{q-1}=i$, i.e. $n-i=iq-i$. From here we get the expected result:
$$\hat q=\frac ni.$$
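If you want to double-check this numerically, here is a hedged sketch using `scipy.optimize.minimize_scalar`; the values $n=20$, $i=4$ and the search bounds are illustrative choices of mine:

```python
# A numerical check: maximize the likelihood q^(-i) * (1 - 1/q)^(n - i)
# over q > 1 and compare with the closed form n/i.
import numpy as np
from scipy.optimize import minimize_scalar

n, i = 20, 4

def neg_log_likelihood(q):
    # negative log of q^(-i) * (1 - 1/q)^(n - i)
    return i * np.log(q) - (n - i) * np.log(1 - 1 / q)

res = minimize_scalar(neg_log_likelihood, bounds=(1.0001, 1000.0), method="bounded")
print(f"numerical q_hat = {res.x:.4f}, closed form n/i = {n / i}")  # both ~5.0
```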
NOTE
You can see here that the MLE does have the invariance property: if $\frac in$ is the MLE for $p$, then the MLE for $q=\frac1p$ is $\frac ni$. I did the proof above for you and me because I don't trust theorems (the invariance property, this time) whose proofs I've never digested.