The question was answered with a link in a comment, so let me give the argument from that link here, for completeness.
We assume a statistical model for data $X$ is parameterized by $\theta$ (which can be a scalar, a vector, or something more general). Let $L(\theta)$ be the likelihood function, and let the value of $\theta$ maximizing it be the maximum likelihood estimator (MLE) $\hat{\theta}$; we assume this estimator exists and is unique. We want the MLE of $g(\theta)$, a function of $\theta$. First assume that $g$ is one-to-one. Then we can write
$$
L(\theta) = L(g^{-1}(g(\theta)))
$$
and the right-hand side, viewed as a function of $\eta = g(\theta)$, is maximized precisely when $g^{-1}(\eta) = \hat{\theta}$; by definition its maximizer is $\widehat{g(\theta)}$, the MLE of $g(\theta)$, so
$$
\hat{\theta} = g^{-1}\big(\widehat{g(\theta)}\big)
$$ or
$$
g(\hat{\theta}) = \widehat{g(\theta)}.
$$
If $g$ is many-to-one, the MLE of $\eta = g(\theta)$ is defined as the maximizer of the induced likelihood $L^*(\eta) = \sup_{\theta:\, g(\theta) = \eta} L(\theta)$. Since $L^*(g(\hat{\theta})) = L(\hat{\theta})$ is the global maximum of $L$, the value $g(\hat{\theta})$ maximizes $L^*$, and the identity $\widehat{g(\theta)} = g(\hat{\theta})$ still holds. (Argument paraphrased from the link in the comment above.)
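As a quick numerical illustration (my own minimal sketch, not from the linked post; the exponential model and all names here are chosen just for the example), maximizing an exponential likelihood in terms of the rate $\lambda$ and in terms of the reparameterized mean $\mu = g(\lambda) = 1/\lambda$ yields maximizers related by $\hat\mu = g(\hat\lambda)$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)  # sample with true rate lambda = 0.5

# Negative log-likelihood as a function of the rate lambda
def nll_rate(lam):
    return -(len(x) * np.log(lam) - lam * x.sum())

# Same model reparameterized by the mean mu = g(lambda) = 1/lambda (one-to-one)
def nll_mean(mu):
    return nll_rate(1.0 / mu)

lam_hat = minimize_scalar(nll_rate, bounds=(1e-6, 10.0), method="bounded").x
mu_hat = minimize_scalar(nll_mean, bounds=(1e-6, 10.0), method="bounded").x

print(lam_hat, 1.0 / mu_hat)  # essentially equal: hat(mu) = g(hat(lambda))
print(mu_hat, x.mean())       # both are the sample mean, the known MLE of the mean
```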
Working with extrema requires care, but it doesn't have to be difficult. The crucial question, found near the middle of the post, is
... we need to show that $\frac{n+1}{n}\hat{\theta}$ is unbiased.
Earlier you obtained
$$\hat\theta = \max(-y_1, y_n/2) = \max\{-\min\{y_i\}, \max\{y_i\}/2\}.$$
Although that looks messy, the calculations become elementary when you consider the cumulative distribution function $F$. To get started with this, note that $0\le \hat\theta \le \theta$. Let $t$ be a number in this range. By definition,
$$\eqalign{
F(t) &= \Pr(\hat\theta\le t) \\ &= \Pr(-y_1 \le t\text{ and }y_n/2 \le t) \\
&= \Pr(-t \le y_1 \le y_2 \le \cdots \le y_n \le 2t).
}$$
This is the chance that all $n$ values lie between $-t$ and $2t$, an interval of length $3t$. Because the distribution is uniform on an interval of length $3\theta$, the probability that any specific $y_i$ lies in $[-t, 2t]$ is proportional to its length:
$$\Pr(y_i \in [-t, 2t]) = \frac{3t}{3\theta} = \frac{t}{\theta}.$$
Because the $y_i$ are independent, these probabilities multiply, giving
$$F(t) = \left(\frac{t}{\theta}\right)^n.$$
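This formula is easy to corroborate by simulation (a sketch with arbitrarily chosen $\theta$, $n$, and replication count):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 5, 100_000

# Each row is an i.i.d. sample y_1, ..., y_n from Uniform(-theta, 2*theta)
y = rng.uniform(-theta, 2 * theta, size=(reps, n))
theta_hat = np.maximum(-y.min(axis=1), y.max(axis=1) / 2)

# Empirical CDF of theta_hat versus (t/theta)^n at a few test points
for t in (0.5, 1.0, 1.5):
    print(t, (theta_hat <= t).mean(), (t / theta) ** n)
```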
The expectation can be found immediately by integrating the survival function $1-F$ over the interval of possible values of $\hat\theta$, $[0, \theta]$, with the substitution $y=t/\theta$:
$$\mathbb{E}(\hat\theta) = \int_0^\theta \left(1 - \left(\frac{t}{\theta}\right)^n\right)dt = \int_0^1 (1-y^n)\theta dy = \frac{n}{n+1}\theta.$$
(This formula for the expectation is derived from the usual integral via integration by parts. Details are given at the end of https://stats.stackexchange.com/a/105464.)
Rescaling by $(n+1)/n$ gives
$$\mathbb{E}\left(\frac{n+1}{n}\,{\hat \theta}\right) = \theta,$$
QED.
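A short simulation (again with arbitrary $\theta$ and $n$) illustrates both the bias of $\hat\theta$ and the unbiasedness of the rescaled estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 2.0, 5, 200_000

y = rng.uniform(-theta, 2 * theta, size=(reps, n))
theta_hat = np.maximum(-y.min(axis=1), y.max(axis=1) / 2)

print(theta_hat.mean(), n / (n + 1) * theta)    # ~ n*theta/(n+1): biased low
print(((n + 1) / n * theta_hat).mean(), theta)  # ~ theta: unbiased after rescaling
```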
Best Answer
I suggest you draw the likelihood as a function of $\theta$, keeping in mind that $1/\theta$ must be greater than every observation (i.e., what are the bounds on $\theta$?).
Keep in mind that everything but $\theta^{2n}$ in the likelihood is a constant, so you can write it as $c\,\theta^{2n}$; just draw $\mathcal{L}/c$ over the domain of $\theta$. You may find it more convenient to deal with the log-likelihood, or you may not.
It should help you clarify what you're doing.
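For instance, here is a minimal plotting sketch (the sample is made up for illustration; the likelihood form $c\,\theta^{2n}$ and the constraint that $1/\theta$ exceed every observation are taken from the hint above):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.array([0.8, 1.3, 2.1, 0.5])  # hypothetical positive observations
n = len(x)
upper = 1.0 / x.max()               # 1/theta > max(x_i)  <=>  theta < 1/max(x_i)

theta = np.linspace(1e-3, 1.5 * upper, 500)
L_over_c = np.where(theta < upper, theta ** (2 * n), 0.0)  # L/c = theta^(2n) on its domain

plt.plot(theta, L_over_c)
plt.axvline(upper, linestyle="--")  # likelihood vanishes beyond 1/max(x_i)
plt.xlabel(r"$\theta$")
plt.ylabel(r"$L(\theta)/c$")
plt.show()
```

Notice where on its domain the curve attains its maximum; that observation answers the question in the quote.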
If you're still stuck, consider thinking in terms of $\psi = 1/\theta$ and then go back to doing it in terms of $\theta$.
(Alternatively - look back at that uniform example. What would the MLE of $\theta$ be if the data were uniform on $[0,1/\theta]$? Can you see how to do the original problem now?)