Suppose you have $n = 10$ trials with $x$ successes and you want
to test $H_0: p = 1/2$ vs $H_a: p \ne 1/2$ at (somewhere near) the 5% level.
I say 'somewhere near' because the binomial distribution is discrete, so
it is not possible in general to achieve an exact significance level.
Under $H_0$ (that is, assuming the null hypothesis to be true), the number of successes $X \sim \mathsf{Binom}(n=10,\, p = 1/2).$
In your last inequality the fraction on the left-hand side is smallest when $\hat p = X/n$ is far from $1/2.$ So you need to reject when the number of successes $X$ is far from $n/2.$
n = 10; x = 1:(n-1)
p = x/n                             # MLE of p for each possible number of successes
frac = .5^n/(p^x*(1-p)^(n-x))       # likelihood ratio L(1/2)/L(p-hat)
plot(x, frac, pch=19)
Accordingly, we might reject for $X = 0,1,9,10,$ the four values most removed from $10/2 = 5.$ A calculation using the binomial PMF gives $P(X \le 1) = P(X \ge 9) = 0.0107,$ for a total rejection probability of $0.0215.$ So that rejection rule leads to a test at about the 2% level.
If instead we reject for $X = 0,1,2,8,9,10,$ then the significance level rises to $0.109,$ so we would be testing at about the 11% level. To keep the significance level below 5%, we must use the rule that rejects for $X = 0,1,9,10.$
Here is a graph of the relevant binomial PMF:
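The graph can be reproduced with a few lines of R (a sketch; the red highlighting of the rejection region is an illustrative styling choice, not part of the original figure):

```r
# Plot the Binomial(10, 1/2) PMF under H0, marking the rejection
# region X in {0, 1, 9, 10} in red (illustrative reconstruction).
n = 10
x = 0:n
pmf = dbinom(x, size = n, prob = 0.5)
rej = x %in% c(0, 1, 9, 10)          # rejection region
plot(x, pmf, type = "h", lwd = 3,
     col = ifelse(rej, "red", "black"),
     xlab = "Number of successes x", ylab = "P(X = x)",
     main = "Binomial(10, 1/2) PMF with rejection region in red")
```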
Computations using R statistical software:
rej = c(0,1,9,10); sum(dbinom(rej, 10, .5))
[1] 0.02148438
rej = c(0,1,2,8,9,10); sum(dbinom(rej, 10, .5))
[1] 0.109375
Note: For larger values of $n,$ one might approximate binomial probabilities using a normal distribution, but $n = 10$ is a bit too small for completely satisfactory normal approximations.
Suppose you observe $\mathbf Y=(Y_1,Y_2,\ldots,Y_n)$ where $Y_i\sim N(\theta x_i,1)$ independently for all $i,$ with the $x_i$ fixed constants.
The MLE of $\theta$ is given by $$\hat\theta(\mathbf Y)=\frac{\sum_{i=1}^n x_i Y_i}{\sum_{i=1}^n x_i^2}$$
By the reproductive property of the normal distribution, we have an exact distribution for the MLE: $$\hat\theta\sim N\left(\theta,\frac{1}{\sum_{i=1}^n x_i^2}\right)$$
In other words, $$\sqrt{\sum_{i=1}^n x_i^2}\left(\hat\theta-\theta\right)\sim N(0,1)$$
Using this pivot, a $100(1-\alpha)\%$ confidence interval for $\theta$ is $$I=\left[\hat\theta-\frac{z_{\alpha/2}}{\sqrt{\sum_{i=1}^n x_i^2}},\hat\theta+\frac{z_{\alpha/2}}{\sqrt{\sum_{i=1}^n x_i^2}}\right]$$
That is, $$P_{\theta}[\theta\in I]=1-\alpha\quad\forall\,\theta$$
Or equivalently, $$P_{\theta}[\theta\in I^c]=\alpha\quad\forall\,\theta$$
In particular, for any fixed $\theta_0$, $$P_{\theta_0}[\theta_0\in I^c]=\alpha$$
This gives the following critical region of a size $\alpha$ test for testing $H_0:\theta=\theta_0$ against $H_1:\theta\ne\theta_0$:
$$\left\{\mathbf Y:\hat\theta(\mathbf Y)<\theta_0-\frac{z_{\alpha/2}}{\sqrt{\sum_{i=1}^n x_i^2}}\quad\text{ or }\quad \hat\theta(\mathbf Y)>\theta_0+\frac{z_{\alpha/2}}{\sqrt{\sum_{i=1}^n x_i^2}}\right\}$$
Other tests can be derived of course but this gives you a test directly using the confidence interval $I$.
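As a sanity check, a small simulation can verify that the interval $I$ has the stated coverage (a sketch; the values of `theta`, `alpha`, and the design points `x` below are illustrative choices, not from the original):

```r
# Monte Carlo check of the coverage of the interval I for the model
# Y_i ~ N(theta * x_i, 1).  theta, alpha, and x are illustrative.
set.seed(1)
theta = 2; alpha = 0.05
x = c(0.5, 1, 1.5, 2, 2.5)            # fixed design points (example)
se = 1 / sqrt(sum(x^2))               # exact standard error of the MLE
z  = qnorm(1 - alpha/2)
covered = replicate(10000, {
  y = rnorm(length(x), mean = theta * x, sd = 1)
  that = sum(x * y) / sum(x^2)        # MLE of theta
  abs(that - theta) <= z * se         # is theta inside the interval?
})
mean(covered)                         # should be close to 0.95
```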
Best Answer
You're given the test statistic and the decision rule for the test, which is: reject $H_0$ if $y_{max} > k$. Now you want to find $k$ such that the test has size $\alpha$; that is, you want the smallest $k$ such that the probability of rejecting the null when it is true is at most $\alpha$.
In math, you want to find the smallest $k$ such that $$ P(y_{max} > k \, | \, \theta = \theta_0) \le \alpha. $$ The probability above is $$ P(y_{max} > k \, | \, \theta = \theta_0) = 1 - P(y_{max} \le k \, | \, \theta = \theta_0) = 1 - P(Y_1 \le k)^n = 1 - \left( \frac{k}{\theta_0} \right)^n. $$ The second equality follows because the maximum of a random sample is at most $k$ if and only if all of the $Y_i$ are at most $k$, and since they're independent this probability is $P(Y_1 \le k)^n$. The last equality uses $Y_1 \sim \mathsf{Uniform}(0, \theta_0),$ so $P(Y_1 \le k) = k/\theta_0$ for $0 \le k \le \theta_0$.
Alright, so now you can solve $1 - (k/\theta_0)^n \le \alpha$ for $k$ and get $$ k \ge \theta_0 (1-\alpha)^{1/n}. $$ This inequality gives all the values of $k$ for which the test has size at most $\alpha$. You actually wanted the $k$ that maximizes the power, and that is $k = \theta_0 (1-\alpha)^{1/n}$: it is the smallest admissible threshold, and a smaller threshold rejects more often, hence gives higher power.
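A quick simulation confirms that this choice of $k$ gives a test of the intended size (a sketch; `n`, `theta0`, and `alpha` are illustrative values):

```r
# Check that rejecting when y_max > k, with k = theta0 * (1-alpha)^(1/n),
# yields a size-alpha test when Y_i ~ Uniform(0, theta0).
set.seed(1)
n = 20; theta0 = 3; alpha = 0.05      # illustrative values
k = theta0 * (1 - alpha)^(1/n)
rejections = replicate(10000, max(runif(n, 0, theta0)) > k)
mean(rejections)                      # should be close to alpha = 0.05
```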