The likelihood-ratio (LR) test is not terribly useful in this situation. Your test can be simplified from your specified critical region by considering the regions into which the sample maximum can fall. From the ordering in your critical region, the p-value function for your test is:
$$p(\boldsymbol{x}) = \begin{cases}
\text{undefined} & \text{for } \theta_0 < x_{(n)}, \\
1 & \text{for } \theta_1 < x_{(n)} \leqslant \theta_0, \\
(\theta_1 / \theta_0)^n & \text{for } 0 \leqslant x_{(n)} \leqslant \theta_1.
\end{cases}$$
(In the case where $\theta_0 < x_{(n)}$ both hypotheses are falsified by the data, and your LR statistic is undefined, leading to an undefined p-value.)
We can see that, for any significance level $\alpha < (\theta_1 / \theta_0)^n$, the likelihood-ratio test accepts the null hypothesis for every possible observed outcome (and is trivially UMP). For any significance level $\alpha > (\theta_1 / \theta_0)^n$, the test rejects the null if and only if $x_{(n)} \leqslant \theta_1$ (and it is again trivially UMP).
The problem with the LR test in this situation is that the LR is either zero or one, and does not have any gradations inside the range $0 \leqslant x_{(n)} \leqslant \theta_1$. This leads to a test with a binary p-value.
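As a quick illustrative check, here is a minimal Monte Carlo sketch (the values of $\theta_0$, $\theta_1$, and $n$ are chosen arbitrarily for illustration) confirming that the test's size under the null matches $(\theta_1/\theta_0)^n$ and that the LR p-value really is binary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative values: theta0 > theta1, sample size n.
theta0, theta1, n = 1.0, 0.8, 5
n_sims = 200_000

# Simulate the sample maximum under H0: theta = theta0.
x_max = rng.uniform(0, theta0, size=(n_sims, n)).max(axis=1)

# The LR test rejects iff x_(n) <= theta1; its size should be (theta1/theta0)^n.
print(np.mean(x_max <= theta1), (theta1 / theta0) ** n)  # ~0.328 vs 0.32768

# The LR p-value takes only two values: (theta1/theta0)^n or 1.
p_lr = np.where(x_max <= theta1, (theta1 / theta0) ** n, 1.0)
print(np.unique(p_lr))  # [0.32768 1.]
```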
A better test to apply here (one that does not satisfy the conditions of the Neyman-Pearson lemma, but is also UMP) is to impose an additional evidentiary ordering within the range $0 \leqslant x_{(n)} \leqslant \theta_1$, so that smaller values of $x_{(n)}$ are treated as stronger evidence for the alternative hypothesis. Adding this ordering gives the smoother p-value function:
$$p(\boldsymbol{x}) = \begin{cases}
\text{undefined} & \text{for } \theta_0 < x_{(n)}, \\
1 & \text{for } \theta_1 < x_{(n)} \leqslant \theta_0, \\
(x_{(n)} / \theta_0)^n & \text{for } 0 \leqslant x_{(n)} \leqslant \theta_1.
\end{cases}$$
This latter test has the benefit of avoiding a binary p-value, while maintaining the UMP property (again trivially). Intuitively, it encodes the idea that a lower observed maximum is stronger evidence for a lower upper bound in the sampling distribution.
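To see the difference concretely, here is a small simulation sketch (parameter values again arbitrary) showing that under the null this refined p-value is exactly uniform for thresholds below $(\theta_1/\theta_0)^n$, unlike the binary LR p-value:

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, theta1, n = 1.0, 0.8, 5  # arbitrary illustrative values

# Refined p-value for an observed maximum m (assuming 0 <= m <= theta0).
def p_value(m):
    return 1.0 if m > theta1 else (m / theta0) ** n

# Simulate the sample maximum under H0: theta = theta0.
x_max = rng.uniform(0, theta0, size=(100_000, n)).max(axis=1)
p = np.array([p_value(m) for m in x_max])

# Under H0, P(p <= alpha) should equal alpha for alpha <= (theta1/theta0)^n.
for alpha in (0.01, 0.05, 0.2):
    print(alpha, np.mean(p <= alpha).round(4))
```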
(1) Part & parcel of being a uniformly most powerful test for $H_0:\theta=\theta_0$ vs $H_1:\theta>\theta_0$ is being most powerful for $H_0:\theta=\theta_0$ vs $H_1:\theta=\theta_1$ for whichever $\theta_1>\theta_0$ you choose. So the tests are exactly the same. (But there isn't always a UMP test for one-sided alternative hypotheses. Testing hypotheses about the location parameter of a Cauchy distribution with known scale is a standard example.)
(2) The Karlin-Rubin theorem tells you that there is a UMP test for a one-sided alternative hypothesis, & how to form it, when the density (or mass) function of the sufficient statistic has a monotone likelihood ratio. There's no caveat that its distribution must belong to an exponential family; rather, if it does belong to a (full) exponential family, it will have a monotone likelihood ratio. The hypergeometric distribution provides an example of a test statistic whose distribution does not belong to an exponential family & yet whose mass function has a monotone likelihood ratio.
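As a sketch of that last point (population sizes here are arbitrary choices for illustration), one can check numerically that the ratio of hypergeometric mass functions at two parameter values is monotone in the observed count:

```python
import numpy as np
from scipy.stats import hypergeom

# Arbitrary illustrative values: a population of N items containing K
# successes, from which n are drawn; K is the parameter being tested.
N, n = 50, 10
K1, K2 = 20, 30  # two parameter values with K2 > K1

# scipy's hypergeom takes (M = population size, n = successes, N = draws).
k = np.arange(0, n + 1)
ratio = hypergeom.pmf(k, N, K2, n) / hypergeom.pmf(k, N, K1, n)
print(np.all(np.diff(ratio) > 0))  # True: the likelihood ratio increases in k
```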
(3) I don't know of general methods for finding UMP tests other than those you've described. As noted above, they don't always exist; then restricting your search to UMP unbiased tests or locally most powerful tests might be of interest, as might showing that a test under consideration is admissible (i.e. there's no other test with greater power under all versions of the alternative).
Best Answer
Suppose, if possible, that there exists a UMP test $\phi^*$ (say) of level $\alpha$ for testing $H_0:\theta=\theta_0$ vs $H_1:\theta\ne \theta_0$. Then $\phi^*$ will also be UMP at level $\alpha$ for testing $H_0:\theta=\theta_0$ against $H_1':\theta>\theta_0$, as well as against $H_1'':\theta<\theta_0$.
But a UMP level $\alpha$ test for $(H_0,H_1')$ is
$$ \phi_1(\mathbf X)=\begin{cases} 1 & \text{if } \frac{\sqrt n(\overline X-\theta_0)}{\sigma_0}>z_{\alpha}, \\ 0 & \text{otherwise,} \end{cases} $$
and that for $(H_0,H_1'')$ is
$$ \phi_2(\mathbf X)=\begin{cases} 1 & \text{if } \frac{\sqrt n(\overline X-\theta_0)}{\sigma_0}<-z_{\alpha}, \\ 0 & \text{otherwise.} \end{cases} $$
So the test function $\phi^*$ must coincide with $\phi_1$ wherever $\phi_1$ is zero or one, and likewise with $\phi_2$. Now suppose we observe data $\mathbf X$ such that the observed value of $\frac{\sqrt n(\overline X-\theta_0)}{\sigma_0}$ exceeds $z_{\alpha}$. For such $\mathbf X$, we must have $\phi_1(\mathbf X)=1$ and $\phi_2(\mathbf X)=0$. This means that on the part of the sample space where $\frac{\sqrt n(\overline X-\theta_0)}{\sigma_0}>z_{\alpha}$, the test $\phi^*$ cannot coincide with both $\phi_1$ and $\phi_2$ simultaneously. Hence the contradiction.
This is pretty much the idea behind the nonexistence of a UMP test for $(H_0,H_1)$. Hence the LRT is not a UMP test; however, it is a UMP unbiased (UMPU) test.
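A short numerical sketch (with $\alpha$, $n$, $\sigma_0$, and $\theta_0$ chosen arbitrarily for illustration) makes the trade-off visible: the exact power curves show $\phi_1$ beating the two-sided test for $\theta>\theta_0$ and $\phi_2$ beating it for $\theta<\theta_0$, so no single test can dominate against every alternative:

```python
import numpy as np
from scipy.stats import norm

# Arbitrary illustrative setup: X_1,...,X_n iid N(theta, sigma0^2), sigma0 known.
alpha, n, sigma0, theta0 = 0.05, 25, 1.0, 0.0
z = norm.ppf(1 - alpha)        # z_alpha for the one-sided tests
z2 = norm.ppf(1 - alpha / 2)   # z_{alpha/2} for the two-sided LRT

thetas = np.linspace(-0.6, 0.6, 7)
delta = np.sqrt(n) * (thetas - theta0) / sigma0  # mean shift of the test statistic

power_phi1 = 1 - norm.cdf(z - delta)                          # reject to the right
power_phi2 = norm.cdf(-z - delta)                             # reject to the left
power_lrt = 1 - norm.cdf(z2 - delta) + norm.cdf(-z2 - delta)  # two-sided test

# phi_1 wins for theta > theta0, phi_2 wins for theta < theta0.
for t, p1, p2, pl in zip(thetas, power_phi1, power_phi2, power_lrt):
    print(f"theta={t:+.2f}  phi1={p1:.3f}  phi2={p2:.3f}  LRT={pl:.3f}")
```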