Before trying to find a UMP test, one should first check whether one exists at all. To do this, form the likelihood ratio
$$l(x)=f_{\theta_1}(x)/f_{\theta_0}(x)$$
This ratio must be monotone non-decreasing in $x$ for every pair $\theta_1\geq \theta_0$ (the monotone likelihood ratio property). In the given question $\theta_1=2$, and the density function is $$f_{2}(x)=2x.$$ Similarly, for $\theta_0\in[1/2,1]$, $$f_{\theta_0}(x)=\theta_0x^{\theta_0-1}.$$ Hence, the likelihood ratio is
$$l_{\theta_0}(x)=\frac{2x}{\theta_0x^{\theta_0-1}}=\frac{2}{\theta_0}x^{2-\theta_0}$$
Since this function is increasing in $x$ for all $\theta_0\in[1/2,1]$, there exists a UMP test of level $\alpha$.
By the definition of a level-$\alpha$ test, the false-alarm probability, i.e. the expected value of the decision rule (here a likelihood ratio test with some threshold $\lambda$) under the null, must stay below $\alpha$ for every $\theta_0$; the threshold is set so that the supremum equals $\alpha$:
$$\alpha=\sup_{\theta_0}\int_{\{x:l_{\theta_0}(x)>\lambda\}}f_{\theta_0}(x)\mathrm{d}x=\sup_{\theta_0}\int_{\{x:l_{\theta_0}(x)>\lambda\}}\theta_0x^{\theta_0-1}\mathrm{d}x$$
Now, we have a nice simplification (Why?) $${\{x:l_{\theta_0}(x)>\lambda\}}\equiv {\{x:x>\lambda^{'}\}}$$
Hence
$$\alpha=\sup_{\theta_0}\int_{\{x:l_{\theta_0}(x)>\lambda\}}\theta_0x^{\theta_0-1}\,\mathrm{d}x=\sup_{\theta_0}\int_{\lambda'}^1\theta_0x^{\theta_0-1}\,\mathrm{d}x=\sup_{\theta_0}\left(1-(\lambda')^{\theta_0}\right)=0.05$$
It is known that $\lambda'\in[0,1]$ and $\theta_0\in[1/2,1]$. Now, what value of $\theta_0$ maximizes $1-(\lambda')^{\theta_0}$, or equivalently minimizes $(\lambda')^{\theta_0}$?
The UMP test is then $$\phi(x)=\begin{cases}1,\quad x>\lambda'\\0,\quad x\leq \lambda'\end{cases}$$
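To answer the question above numerically, here is a minimal sketch (assuming, as in the derivation, the null family $f_{\theta_0}(x)=\theta_0 x^{\theta_0-1}$ on $(0,1)$ with $\theta_0\in[1/2,1]$ and $\alpha=0.05$). Since $0<\lambda'<1$, $(\lambda')^{\theta_0}$ is decreasing in $\theta_0$, so the supremum of the false-alarm probability is attained at $\theta_0=1$, which pins down $\lambda'=1-\alpha$:

```python
ALPHA = 0.05

# Null family (an assumption matching the derivation above):
# f_theta(x) = theta * x^(theta - 1) on (0, 1), theta0 in [1/2, 1].
# The test "reject when X > lam" has false-alarm probability 1 - lam**theta0.
# Since 0 < lam < 1, lam**theta0 decreases in theta0, so the supremum over
# theta0 in [1/2, 1] is attained at theta0 = 1:
#     alpha = 1 - lam  =>  lam = 1 - alpha
lam = 1 - ALPHA
print(lam)  # 0.95

# Brute-force check of the supremum over a grid of theta0 values.
thetas = [0.5 + 0.5 * i / 1000 for i in range(1001)]
sup_false_alarm = max(1 - lam ** t for t in thetas)
print(abs(sup_false_alarm - ALPHA) < 1e-12)  # True
```

The grid scan confirms that no smaller threshold keeps the size at $0.05$ for every $\theta_0$ in the null set.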
The NP lemma tells you to reject $H_0:\theta=a$ in favour of $H_1:\theta=b$ for large values of the ratio $r(x)=f_b(x)/f_a(x)$. So compute $r(x)$ for every $x$:
$$\begin{array}{|c|c|c|c|c|}
\hline x&1&2&3&4\\
\hline r(x)&3/2& 3/2 & 2 & 1/5\\
\hline
\end{array}$$
As you can see, $$r(3)>r(1)=r(2)>r(4) \tag{$\star$}$$
By the NP lemma, an MP (or UMP) size-$0.1$ test takes the form
$$\phi(x)=\begin{cases}
1&, \text{ if }x\in R_1
\\ \gamma &, \text{ if }x\in R_2
\\ 0&, \text{ otherwise }
\end{cases}
$$
where $\gamma \in[0,1]$ and the regions $R_1,R_2$ are such that
$$E_{H_0}\phi(X)=P_{H_0}(X\in R_1)+\gamma P_{H_0}(X\in R_2)=0.1$$
Further, $R_1,R_2$ are chosen keeping in mind the order in $(\star)$. That is to say, the sample point $3$ will be the first to enter the rejection region, followed by $1$ and/or $2$, and finally $4$.
Here you can only choose $R_1=\{3\}$, because $P_{H_0}(X=3)=\frac1{12}\approx0.0833<0.1$, while the probability of each of the other sample points under $H_0$ already exceeds $0.1$, so adding any of them to $R_1$ would push the size over $0.1$.
As for $R_2$, you can take $R_2=\{1\}$ or $R_2=\{2\}$ or $R_2=\{1,2\}$. This gives you 3 UMP tests, each with a different $\gamma$ found subject to the size $0.1$ restriction. You can check that all three tests have the same power $E_{H_1}\phi(X)$. The solution you mention takes $R_2=\{1,2\}$.
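The bookkeeping above can be checked with exact arithmetic. The pmfs below are a hypothetical reconstruction: the answer only fixes the ratios $r(x)$ and $P_{H_0}(X=3)=1/12$. Normalization then forces $P_{H_0}(X=4)=5/12$ and $P_{H_0}(X=1)+P_{H_0}(X=2)=1/2$, and the equal split $1/4,1/4$ is an assumption (the power turns out not to depend on that split, since $r(1)=r(2)$):

```python
from fractions import Fraction as F

# Hypothetical pmfs consistent with the table of ratios r(x) and with
# P_H0(X=3) = 1/12.  Normalization forces P_H0(X=4) = 5/12 and
# P_H0(X=1) + P_H0(X=2) = 1/2; the equal split 1/4, 1/4 is an assumption.
f0 = {1: F(1, 4), 2: F(1, 4), 3: F(1, 12), 4: F(5, 12)}
ratios = {1: F(3, 2), 2: F(3, 2), 3: F(2), 4: F(1, 5)}
f1 = {x: ratios[x] * p for x, p in f0.items()}   # alternative pmf f_b = r * f_a

def randomized_test(R1, R2, size=F(1, 10)):
    """NP test: reject on R1, reject with prob. gamma on R2; returns (gamma, power)."""
    gamma = (size - sum(f0[x] for x in R1)) / sum(f0[x] for x in R2)
    power = sum(f1[x] for x in R1) + gamma * sum(f1[x] for x in R2)
    return gamma, power

for R2 in ({1}, {2}, {1, 2}):
    gamma, power = randomized_test({3}, R2)
    print(sorted(R2), gamma, power)   # power is 23/120 in all three cases
```

Under these assumptions $\gamma=1/15$ for $R_2=\{1\}$ or $\{2\}$ and $\gamma=1/30$ for $R_2=\{1,2\}$, and all three tests indeed share the same power.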
Lastly, the $t$ distribution is completely off-topic here, and there is no reason to bring it into the solution.
You are confusing the notions of "antiderivative" and "definite integral".
In the question you referenced, the answer implies that an ANTIDERIVATIVE can be negative. For example, as you know,
$$\int f(x)\,dx = F(x) + C$$
where $C$ can be any constant, including a negative one. For example, if we take $C = -4$, then
$$\int 0\,dx = C = -4,$$
which is correct since we found a FUNCTION s.t. $(-4)' = 0$.
But this is not the same thing as in Casella–Berger. C&B refers to the DEFINITE integral over all sample points $x \in \mathcal{X}$ (check the full definition in 8.3.3).
In our case above, if we add limits of integration, and assume $F(x) = C = -4$:
$$\int_{a}^b 0\,dx = F(x)\big|_a^b = (-4)\big|_a^b = (-4) - (-4) = 0$$
Thus, we have shown that the definite integral and the antiderivative are not the same thing: the former is a number (which is zero if the integrand is zero), while the latter is a function (which can be negative even if the integrand is zero).
Now, as for why the definite integral must be nonnegative, we can write the Riemann integral as a limit of Riemann sums:
$$\int_a^b f(x)dx = \lim_{n \rightarrow \infty} \sum_{i = 1}^n (x_i - x_{i-1})f(x_i)$$
where $a = x_0 < x_1 < \cdots < x_n = b$ is a partition of $[a, b]$ whose mesh shrinks as $n \to \infty$. Then, if we know that $f(x) \geq 0$, then
$$\sum_{i = 1}^n (x_i - x_{i-1})f(x_i) \geq \sum_{i = 1}^n (x_i - x_{i-1})\cdot 0 = 0$$
from where
$$\int_a^b f(x)dx = \lim_{n \rightarrow \infty}\sum_{i = 1}^n (x_i - x_{i-1})f(x_i) \geq \lim_{n \rightarrow \infty}\sum_{i = 1}^n (x_i - x_{i-1})\cdot 0 = 0$$
So, in Casella & Berger they do the same, but with $f(x \mid \theta_i)$ integrated over the sample space $x \in \mathcal{X}$, and the inequality holds.
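As a small numerical illustration of the argument (the integrand $f(x)=x^2$ on $[0,1]$ is just an arbitrary nonnegative choice), every Riemann sum of a nonnegative function is itself nonnegative, so the limiting definite integral is too:

```python
def riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f over a uniform partition of [a, b]."""
    h = (b - a) / n
    return sum(h * f(a + i * h) for i in range(1, n + 1))

# f(x) = x^2 >= 0: each Riemann sum is >= 0, and they approach the integral 1/3.
for n in (10, 100, 1000):
    s = riemann_sum(lambda x: x * x, 0.0, 1.0, n)
    print(n, s, s >= 0)   # the flag is True for every n

# The zero integrand from the antiderivative example integrates to 0 exactly.
print(riemann_sum(lambda x: 0.0, 0.0, 1.0, 1000))  # 0.0
```

Note that the antiderivative used to evaluate such an integral (say $x^3/3 + C$ with $C = -4$) can still take negative values; only the definite integral of a nonnegative integrand is guaranteed nonnegative.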