Additional comments: Your answer seems OK. It may be of interest to know that
$\hat \theta$ is not unbiased. One can get a rough idea of the distribution
of $\hat \theta$ for a particular $\theta$ by simulating many samples of
size $n.$ In fact a convenient 'unbiasing' constant multiple does exist here: because $-\sum_{i=1}^n \log X_i \sim \mathsf{Gamma}(n, \theta),$ we have $E(\hat\theta) = \frac{n\theta}{n-1},$ so $\frac{n-1}{n}\,\hat\theta$ is unbiased.
The Wikipedia article I linked in my Comment above gives more information.
Here is a simulation for $n = 10$ and $\theta = 5.$
th = 5; n = 10                  # true parameter value and sample size
th.mle = -n/replicate(10^6, sum(log(rbeta(n, th, 1))))   # MLE from each of 10^6 samples
mean(th.mle)
## 5.555069   # approx. expectation of th.mle; greater than th = 5
median(th.mle)
## 5.172145
The histogram below shows the simulated distribution of $\hat \theta.$
The vertical red line is at the mean of that distribution, and the green
curve is its kernel density estimator (KDE). According to the KDE, its mode is near $4.62.$
den.inf = density(th.mle)                # kernel density estimate (KDE)
den.inf$x[den.inf$y==max(den.inf$y)]     # location of the KDE's mode
## 4.624876
hist(th.mle, br=50, prob=TRUE, col="skyblue2", main="")
abline(v = mean(th.mle), col="red")             # vertical line at the mean
lines(density(th.mle), lwd=2, col="darkgreen")  # overlay the KDE
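The simulated mean can also be checked in closed form: if $X \sim \mathsf{Beta}(\theta, 1),$ then $-\log X \sim \mathsf{Exp}(\theta),$ so $-\sum_{i=1}^n \log X_i \sim \mathsf{Gamma}(n, \theta)$ and $E(\hat\theta) = n\theta/(n-1).$ For $n = 10$ and $\theta = 5$ this gives $50/9 \approx 5.5556,$ in close agreement with the simulated mean $5.555069$ above. A quick sketch (the seed and the reduced number of replications here are my own choices):

```r
# Exact check: if X ~ Beta(th, 1), then -log(X) ~ Exp(rate = th), so
# G = -sum(log(X_i)) ~ Gamma(n, rate = th) and E(n/G) = n*th/(n-1).
th = 5; n = 10
n*th/(n - 1)                   # exact E(th.mle)
## 5.555556
set.seed(2025)                 # seed chosen here for reproducibility
th.mle.chk = -n/replicate(10^5, sum(log(rbeta(n, th, 1))))
mean(th.mle.chk)               # close to 5.5556
mean((n - 1)/n * th.mle.chk)   # close to th = 5: (n-1)/n removes the bias
```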
Addendum on Parametric Bootstrap Confidence Interval for $\theta:$
In order to find a confidence interval (CI) for $\theta$ based on MLE $\hat \theta,$ we would like to know the distribution of $V = \frac{\hat \theta}{\theta}.$ When that distribution is not
readily available, we can use a parametric bootstrap.
If we knew the distribution of $V,$ then we could find numbers $L$ and $U$ such that
$P(L \le V = \hat\theta/\theta \le U) = 0.95$ so that a 95% CI would be of the form
$\left(\frac{\hat \theta}{U},\, \frac{\hat\theta}{L}\right).$ Because we do not know the distribution of $V,$ we use a bootstrap procedure to get serviceable approximations $L^*$ and $U^*$ of $L$ and $U,$ respectively.
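Writing out the inversion of the pivot explicitly (a short algebra step, using only that $\theta, \hat\theta > 0$):

```latex
L \le \frac{\hat\theta}{\theta} \le U
\;\Longleftrightarrow\;
\frac{L}{\hat\theta} \le \frac{1}{\theta} \le \frac{U}{\hat\theta}
\;\Longleftrightarrow\;
\frac{\hat\theta}{U} \le \theta \le \frac{\hat\theta}{L},
```

so the event of probability $0.95$ is exactly the event that the interval $\left(\frac{\hat\theta}{U},\, \frac{\hat\theta}{L}\right)$ covers $\theta.$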
To begin, suppose we have a random sample of size $n = 50$ from $\mathsf{Beta}(\theta, 1)$
where $\theta$ is unknown and its observed MLE is $\hat \theta = 6.511.$
Entering the so-called 'bootstrap world,' we take repeated 're-samples' of size $n=50$
from $\mathsf{Beta}(\hat \theta = 6.511,\, 1).$ Then we find the bootstrap
estimate $\hat \theta^*$ from each re-sample. Temporarily using the observed
MLE $\hat \theta = 6.511$ as a proxy for the unknown $\theta,$ we find a large number $B$ of re-sampled values $V^* = \hat\theta^*/\hat \theta.$ Then we use quantiles .02 and .97 of
these $V^*$'s as $L^*$ and $U^*,$ respectively.
Returning to the 'real world,'
the observed MLE $\hat \theta$ returns to its original role as an estimator, and the
95% parametric bootstrap CI is $\left(\frac{\hat\theta}{U^*},\, \frac{\hat\theta}{L^*}\right).$
The R code, in which re-sampled quantities are denoted by .re
instead of $*$, is shown below.
For this run with set.seed(213)
the 95% CI is $(4.94, 8.69).$ Other runs with unspecified
seeds using $B=10,000$ re-samples of size $n = 50$ will give very similar values. [In a real-life application, we would not know whether this CI covers the 'true' value of $\theta.$ However,
I generated the original 50 observations using parameter value $\theta = 6.5,$ so in this demonstration we
do know that the CI covers the true parameter value $\theta.$ We could have used the
probability-symmetric CI with quantiles .025 and .975, but the one shown is a little shorter.]
set.seed(213)
B = 10000; n = 50; th.mle.obs=6.511
v.re = th.mle.obs/replicate(B, -n/sum(log(rbeta(n,th.mle.obs,1))))
L.re = quantile(v.re, .02); U.re = quantile(v.re, .97)
c(th.mle.obs/U.re, th.mle.obs/L.re)
##      97%      2%
## 4.936096 8.691692
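One can also gauge how this interval performs over repeated samples with a rough coverage check. The sketch below repeats the whole procedure on many fresh samples from $\mathsf{Beta}(6.5, 1)$; the seed, the reduced $B,$ and the number of repetitions are my own choices, made so it runs in a few seconds:

```r
# Estimate the coverage of the 95% parametric bootstrap CI by repeating the
# whole procedure on many fresh samples from Beta(th = 6.5, 1).
set.seed(2025)
th = 6.5; n = 50; B = 500; reps = 200   # B reduced from 10000 for speed
cover = replicate(reps, {
  th.hat = -n/sum(log(rbeta(n, th, 1)))            # MLE from a fresh sample
  v.re = th.hat/replicate(B, -n/sum(log(rbeta(n, th.hat, 1))))
  ci = th.hat/quantile(v.re, c(.97, .02))          # (lower, upper) endpoints
  ci[1] <= th & th <= ci[2]                        # does the CI cover th?
})
mean(cover)   # proportion of intervals covering th; should be near 0.95
```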
I don't understand your solution, so I'm doing it myself here.
Assume $\theta > 0$. Setting $y_i = |x_i|$ for $i = 1, \dots, n$, we have
$$\begin{align}
L(\theta)=\prod_{i=1}^{n}f_{X_i}(x_i)&=\prod_{i=1}^{n}\left(\dfrac{1}{2\theta}\right)\mathbb{I}_{[-\theta, \theta]}(x_i) \\
&=\left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[-\theta, \theta]}(x_i) \\
&= \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(|x_i|) \\
&= \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(y_i)\text{.}
\end{align}$$
Assume that $y_i \in [0, \theta]$ for all $i = 1, \dots, n$ (otherwise $L(\theta) = 0$ because $\mathbb{I}_{[0, \theta]}(y_j) = 0$ for at least one $j$, which obviously does not yield the maximum value of $L$). Then I claim the following:
Claim. $y_1, \dots, y_n \in [0, \theta]$ if and only if $\max_{1 \leq i \leq n}y_i = y_{(n)} \leq \theta$ and $\min_{1 \leq i \leq n}y_i = y_{(1)}\geq 0$.
I leave the proof up to you. From the claim above and observing that $y_{(1)} \leq y_{(n)}$, we have
$$L(\theta) = \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(y_i) = \left(\dfrac{1}{2\theta}\right)^n\mathbb{I}_{[0, y_{(n)}]}(y_{(1)})\mathbb{I}_{[y_{(1)}, \theta]}(y_{(n)}) \text{.}$$
Viewing this as a function of $\theta > 0$, we see that $\left(\dfrac{1}{2\theta}\right)^n$ is decreasing with respect to $\theta$. Thus, $\theta$ needs to be as small as possible to maximize $L$. Furthermore, the product of indicators
$$\mathbb{I}_{[0, y_{(n)}]}(y_{(1)})\mathbb{I}_{[y_{(1)}, \theta]}(y_{(n)}) $$
will be non-zero if and only if $\theta \geq y_{(n)}$. Since $y_{(n)}$ is therefore the smallest admissible value of $\theta$, we have
$$\hat{\theta}_{\text{MLE}} = y_{(n)} = \max_{1 \leq i \leq n} y_i = \max_{1 \leq i \leq n }|x_i|\text{,}$$
as desired.
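The derivation is easy to sanity-check numerically. A minimal sketch (the sample, $\theta = 3,$ and the grid are illustrative choices of mine, not from the question):

```r
# Check that max|x_i| maximizes L(t) = (1/(2t))^n * prod(I{|x_i| <= t}):
# the log-likelihood is -n*log(2t) for t >= max|x_i| and -Inf otherwise.
set.seed(2025)
th = 3; n = 100
x = runif(n, -th, th)                  # sample from Uniform(-th, th)
th.hat = max(abs(x))                   # the MLE derived above
loglik = function(t) ifelse(t >= max(abs(x)), -n*log(2*t), -Inf)
grid = seq(0.01, 5, by = 0.01)         # candidate values of t
grid[which.max(loglik(grid))]          # agrees with th.hat up to grid spacing
```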
Best Answer
\begin{align} \mathcal{L}(\theta) &= \prod_{i=1}^n c(\theta)(1-\exp(-|X_i|))\,1\{|X_i|\leq\theta\} \\ &=c^n(\theta)\,1\!\left\{\max_{1\le i\le n}|X_i|\le \theta\right\} \prod_{i=1}^n (1-\exp(-|X_i|)). \end{align}

Since $c(\theta)$ is decreasing in $\theta$ but the indicator is $0$ when $\max_{1\le i\le n}|X_i|>\theta$, the MLE of $\theta$ is
$$ \hat{\theta}_n=\max_{1\le i\le n}|X_i|. $$

Now, for $0<t<n\theta$ (note that $0\le \hat{\theta}_n\le \theta$),
\begin{align} \mathsf{P}\!\left(n(\hat{\theta}_n-\theta)\le -t\right)&=\mathsf{P}\!\left(\max_{1\le i\le n}|X_i|\le \theta-\frac{t}{n}\right)=\left[\mathsf{P}\!\left(|X_1|\le \theta-\frac{t}{n}\right)\right]^n \\ &=\left[\left(e^{-\theta+\frac{t}{n}}+\theta-\frac{t}{n}-1\right)\left(\theta+e^{-\theta}-1\right)^{-1}\right]^n \\ &\to \exp\left(-\frac{t\left(e^{\theta}-1\right)}{e^{\theta}(\theta -1)+1}\right) \end{align}
as $n\to\infty$. Thus, the distribution of $n(\theta-\hat{\theta}_n)$ converges to the gamma distribution with shape parameter $1$ and scale parameter
$$ \frac{e^{\theta}(\theta-1)+1}{e^{\theta}-1}. $$
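This limit can be checked by simulation. Since the density is proportional to $1-e^{-|x|}$ on $[-\theta, \theta],$ one can sample from it by rejection from a $\mathsf{Uniform}(-\theta, \theta)$ proposal, accepting a draw $x$ with probability $1-e^{-|x|}.$ A sketch under those assumptions (the parameter values and seed are illustrative):

```r
# Simulate n(theta - theta.hat) and compare with the limiting exponential
# (gamma with shape 1) whose scale is (e^th (th - 1) + 1)/(e^th - 1).
set.seed(2025)
th = 2; n = 400; reps = 10^4
rtrunc = function(m, th) {             # rejection sampler for the density above
  out = numeric(0)
  while (length(out) < m) {
    x = runif(2*m, -th, th)                        # uniform proposals
    out = c(out, x[runif(2*m) < 1 - exp(-abs(x))]) # accept w.p. 1 - e^{-|x|}
  }
  out[1:m]
}
w = replicate(reps, n*(th - max(abs(rtrunc(n, th)))))
scale.thry = (exp(th)*(th - 1) + 1)/(exp(th) - 1)
c(sim.mean = mean(w), theory.mean = scale.thry)    # both near 1.31 for th = 2
```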