Before trying to find a UMP test, one first needs to check whether one exists. To do this, form the likelihood ratio function
$$l(x)=f_{\theta_1}(x)/f_{\theta_0}(x)$$
This function must be monotone non-decreasing in $x$ for every pair $\theta_1> \theta_0$ (the monotone likelihood ratio property). In the given question $\theta_1=2$, and the density function is $$f_{2}(x)=2x.$$ Similarly, for $\theta_0\in[1/2,1]$, $$f_{\theta_0}(x)=\theta_0x^{\theta_0-1}.$$ Hence, the likelihood ratio function is
$$l_{\theta_0}(x)=\frac{2x}{\theta_0x^{\theta_0-1}}=\frac{2}{\theta_0}x^{2-\theta_0}$$
Since the exponent $2-\theta_0\geq 1>0$, this function is increasing in $x$ for all $\theta_0\in[1/2,1]$, and hence a UMP test of level $\alpha$ exists.
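This monotonicity can also be sanity-checked numerically. The sketch below (an illustration, not part of the proof) evaluates $l_{\theta_0}(x)=\frac{2}{\theta_0}x^{2-\theta_0}$ on a grid and confirms it is strictly increasing in $x$ for every sampled $\theta_0\in[1/2,1]$:

```python
# Numerical sanity check: l_{theta0}(x) = (2/theta0) * x**(2 - theta0)
# should be strictly increasing in x on (0, 1) for all theta0 in [1/2, 1].

def likelihood_ratio(x, theta0):
    """l_{theta0}(x) = f_2(x) / f_{theta0}(x) for the density theta*x^(theta-1) on (0, 1)."""
    return (2.0 / theta0) * x ** (2.0 - theta0)

xs = [i / 100.0 for i in range(1, 100)]          # grid on (0, 1)
thetas = [0.5 + i * 0.05 for i in range(11)]     # grid on [1/2, 1]

for theta0 in thetas:
    values = [likelihood_ratio(x, theta0) for x in xs]
    # strictly increasing along the grid
    assert all(a < b for a, b in zip(values, values[1:]))

print("l is increasing in x for all sampled theta0")
```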
The test has level $\alpha$ when its false-alarm probability, i.e. the expected value of the decision rule under $f_{\theta_0}$ (the decision rule being the likelihood ratio test with a threshold $\lambda$), is at most $\alpha$ for every $\theta_0$. Calibrating the threshold so that the worst case equals $\alpha$ gives
$$\alpha=\sup_{\theta_0}\int_{\{x:l_{\theta_0}(x)>\lambda\}}f_{\theta_0}(x)\mathrm{d}x=\sup_{\theta_0}\int_{\{x:l_{\theta_0}(x)>\lambda\}}\theta_0x^{\theta_0-1}\mathrm{d}x$$
Now, we have a nice simplification (Why?) $${\{x:l_{\theta_0}(x)>\lambda\}}\equiv {\{x:x>\lambda^{'}\}}$$
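To make this step explicit: since $2-\theta_0>0$, the inequality $l_{\theta_0}(x)>\lambda$ can be solved for $x$ directly,
$$\frac{2}{\theta_0}x^{2-\theta_0}>\lambda\iff x^{2-\theta_0}>\frac{\lambda\theta_0}{2}\iff x>\left(\frac{\lambda\theta_0}{2}\right)^{\frac{1}{2-\theta_0}}=:\lambda^{'}.$$
(A priori $\lambda^{'}$ depends on $\theta_0$, but since the rejection region has the form $\{x>\lambda^{'}\}$ for every $\theta_0$, one can calibrate $\lambda^{'}$ itself.)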
Hence
$$\alpha=\sup_{\theta_0}\int_{\{x:l_{\theta_0}(x)>\lambda\}}\theta_0x^{\theta_0-1}\mathrm{d}x=\sup_{\theta_0}\int_{\lambda^{'}}^1\theta_0x^{\theta_0-1}\mathrm{d}x=\sup_{\theta_0}\left(1-{\lambda^{'}}^{\theta_0}\right)=0.05$$
It is known that $\lambda^{'}\in[0,1]$ and $\theta_0\in[1/2,1]$. Now, what value of $\theta_0$ maximizes $1-{\lambda^{'}}^{\theta_0}$, or equivalently minimizes ${\lambda^{'}}^{\theta_0}$?
The UMP test is then $$\phi(x)=\begin{cases}1,\quad x>\lambda^{'}\\0,\quad x\leq \lambda^{'}\end{cases}$$
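Working out the calibration numerically (an illustration, not a new method): for $\lambda^{'}\in(0,1)$, ${\lambda^{'}}^{\theta_0}=e^{\theta_0\ln\lambda^{'}}$ is decreasing in $\theta_0$, so the supremum of $1-{\lambda^{'}}^{\theta_0}$ over $\theta_0\in[1/2,1]$ is attained at $\theta_0=1$, forcing $1-\lambda^{'}=0.05$, i.e. $\lambda^{'}=0.95$. A quick check:

```python
# Numerical check of the calibration step: the false-alarm probability is
# P_{theta0}(X > lam) = 1 - lam**theta0, increasing in theta0 for lam in (0, 1),
# so the supremum over theta0 in [1/2, 1] is attained at theta0 = 1,
# which forces lam = 1 - 0.05 = 0.95.

def false_alarm(lam, theta0):
    """P_{theta0}(X > lam) = integral of theta0*x^(theta0-1) from lam to 1."""
    return 1.0 - lam ** theta0

lam = 0.95
alphas = [false_alarm(lam, 0.5 + i * 0.01) for i in range(51)]  # theta0 grid on [1/2, 1]
assert abs(max(alphas) - 0.05) < 1e-12      # supremum attained at theta0 = 1

def phi(x, lam=0.95):
    """The UMP test: reject H0 (return 1) iff x > lam."""
    return 1 if x > lam else 0

print(phi(0.97), phi(0.9))  # -> 1 0
```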
Best Answer
Your observation is correct. Here, I'll help you sort out what's confusing you.
Now, a source of confusion is that we have two different tests, and the question is which of the two tests $\big (T_A, R_A \big)$ and $\big (T_{NP}, R_{NP} \big)$ is more powerful. The answer is that these two tests are equivalent and both have the same power. Indeed,
$$ T_A \in R_A \color{blue}{\Leftrightarrow} T_{NP} \in R_{NP},$$
which means that $H_0$ is rejected/accepted based on the test $\big (T_A, R_A \big)$ if and only if it is rejected/accepted based on the test $\big (T_{NP}, R_{NP} \big).$
General case:
If the likelihood ratio statistic can be written as $H(T)$ where $T$ is a statistic and $H$ is a strictly monotone function, we can construct a simple test of the form $\big (T, R=(c,\infty) \big)$ when $H$ is increasing and of the form $\big (T, R=(-\infty,c) \big)$ when $H$ is decreasing.
Question: What is the role of $\theta_0$ and $\theta_1$? Answer: The values of the parameters $\theta_0$ and $\theta_1$ determine whether the function $H$ is increasing or decreasing, and thus the form of the rejection region $R$ depends on the parameters $\theta_0$ and $\theta_1$. For example, if, in the OP's setting, we had $\theta_0\neq 0$ and $\theta_1= 0$, then the rejection region would be of the form $R=(-\infty,c)$, that is, we would reject $H_0$ when $X<c$.
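As a standard illustration of this (a textbook example with known variance, not the OP's model): for $X\sim N(\theta,1)$, the likelihood ratio is
$$l(x)=\exp\left((\theta_1-\theta_0)x-\frac{\theta_1^2-\theta_0^2}{2}\right),$$
so $H(t)=\exp\big((\theta_1-\theta_0)t-\frac{\theta_1^2-\theta_0^2}{2}\big)$ with $T=X$. Here $H$ is increasing when $\theta_1>\theta_0$, giving $R=(c,\infty)$, and decreasing when $\theta_1<\theta_0$ (e.g. $\theta_1=0$, $\theta_0>0$), giving $R=(-\infty,c)$.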
The method you used to construct a test is popular because the resulting test is very simple to work with. However, if the likelihood ratio statistic is not a monotone function of a statistic, this approach cannot be used; for example, it fails for $X \sim N(\theta,\theta^2)$.