For unknown $\sigma^2$ (two-sided test of $H_0: \mu = \mu_0$ versus $H_1: \mu \neq \mu_0$)
$$\lambda=\left(\frac{\sum(X_i-\bar{X})^2}{\sum(X_i-\mu_0)^2}\right)^{n/2} =\left(\frac{\sum(X_i-\bar{X})^2}{\sum(X_i-\bar{X})^2+n(\bar{X}-\mu_0)^2}\right)^{n/2}$$
$$=\left(\frac{1}{1+\frac{n(\bar{X}-\mu_0)^2}{\sum(X_i-\bar{X})^2}}\right)^{n/2}$$
$$=\left(\frac{1}{1+\frac{n(\bar{X}-\mu_0)^2}{(n-1)\cdot\frac{1}{n-1}\sum(X_i-\bar{X})^2}}\right)^{n/2}$$
$$=\left(\frac{1}{1+\frac{T^2}{n-1}}\right)^{n/2}$$
where
$T^2=\frac{n(\bar{X}-\mu_0)^2}{\frac{1}{n-1}\sum(X_i-\bar{X})^2}$
Now reject $H_0$ if $\lambda \leq \lambda_0 \Leftrightarrow T^2>c \Leftrightarrow |T|>k$, where $T\sim t(n-1)$ under $H_0$.
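The equivalence above can be checked numerically: since $\lambda$ is a strictly decreasing function of $T^2$, small values of $\lambda$ correspond exactly to large values of $|T|$. Below is a minimal sketch (the sample size, $\mu_0$, and the simulated data are assumptions for illustration); it also verifies that $\lambda$ computed via $T$ agrees with the ratio of sums of squares raised to the power $n/2$, which is how the normal likelihood enters through $(\hat{\sigma}^2)^{-n/2}$.

```python
import numpy as np

# Assumed setup: n = 20 observations, testing H0: mu = mu0 = 0
rng = np.random.default_rng(0)
n, mu0 = 20, 0.0
x = rng.normal(loc=0.5, scale=2.0, size=n)

xbar = x.mean()
s2 = x.var(ddof=1)                      # (1/(n-1)) * sum (x_i - xbar)^2
T = np.sqrt(n) * (xbar - mu0) / np.sqrt(s2)

# LRT statistic via T, as in the derivation above
lam = (1 + T**2 / (n - 1)) ** (-n / 2)

# LRT statistic computed directly from the sums of squares
lam_direct = (np.sum((x - xbar) ** 2) / np.sum((x - mu0) ** 2)) ** (n / 2)
print(np.isclose(lam, lam_direct))
```

Because the map $T^2 \mapsto \lambda$ is monotone, any cutoff $\lambda_0$ on $\lambda$ translates into a cutoff $k$ on $|T|$, which is what makes the test a $t$-test.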
For known $\sigma^2$
$$\lambda=\frac{(2\pi \sigma^2)^{-n/2}\, e^{-\frac{1}{2\sigma^2}\sum (X_i -\mu_0)^2}}{(2\pi \sigma^2)^{-n/2}\, e^{-\frac{1}{2\sigma^2}\sum (X_i -\bar{X})^2}}$$
$$=e^{-\frac{1}{2\sigma^2} \left( \sum (X_i -\mu_0)^2 -\sum (X_i -\bar{X})^2 \right)}$$
$$=e^{-\frac{1}{2\sigma^2}\, n(\bar{X}-\mu_0)^2}$$
$$=e^{-\frac{1}{2} \left(\frac{\bar{X}-\mu_0}{\sigma / \sqrt{n}}\right)^2}$$
now $\lambda \leq \lambda_0$ $\Leftrightarrow$
$$\left(\frac{\bar{X}-\mu_0}{\sigma/ \sqrt{n}}\right)^2 >c$$
$\Leftrightarrow$
$$|Z|=\left|\frac{\bar{X}-\mu_0}{\sigma/ \sqrt{n}}\right| >k,$$
where $Z=\frac{\bar{X}-\mu_0}{\sigma/ \sqrt{n}}\sim N(0,1)$ under $H_0$.
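In the known-variance case the LRT statistic collapses to $\lambda = e^{-Z^2/2}$, so the event $\{\lambda \leq \lambda_0\}$ is literally the event $\{|Z| > \sqrt{-2\log\lambda_0}\}$. A short sketch (the values of $n$, $\sigma$, $\lambda_0$, and the simulated data are assumptions):

```python
import numpy as np

# Assumed setup: known sigma, testing H0: mu = mu0 = 0
rng = np.random.default_rng(1)
n, mu0, sigma = 25, 0.0, 2.0
x = rng.normal(loc=0.0, scale=sigma, size=n)

Z = (x.mean() - mu0) / (sigma / np.sqrt(n))
lam = np.exp(-0.5 * Z**2)          # LRT statistic from the derivation above

# {lambda <= lambda_0} and {|Z| > sqrt(-2 log lambda_0)} are the same event
lam0 = 0.15
print((lam <= lam0) == (abs(Z) > np.sqrt(-2 * np.log(lam0))))
```

This is why choosing $\lambda_0$ is equivalent to choosing a normal cutoff $k$ for $|Z|$.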
According to Casella & Berger's notation (Definition 8.2.1), the likelihood ratio test statistic for testing $H_0: \theta \in \Theta_0$ versus $H_1: \theta \in \Theta_0^c$ is
$$\lambda(\boldsymbol{x}) = \frac{\text{sup}_{\Theta_0}L(\theta \mid \boldsymbol{x})}{\text{sup}_{\Theta}L(\theta \mid \boldsymbol{x})}.$$
A likelihood ratio test (LRT) is any test that has a rejection region of the form $\{\boldsymbol{x}: \lambda(\boldsymbol{x}) \leq c\}$, where $c$ is any number satisfying $0 \leq c \leq 1$.
In the problem of testing $H_0: \theta \leq \theta_0$ versus $H_1: \theta > \theta_0$, where $X_1, \dots, X_n \overset{i.i.d.}{\sim} n(\theta, \sigma^2)$ and $\sigma^2$ is unknown, we have $\Theta_0 = \{(\theta, \sigma^2): \theta \leq \theta_0, \sigma^2 > 0\}$ and $\Theta = \{(\theta, \sigma^2): \theta \in \mathbb{R}, \sigma^2 > 0\}$, and the likelihood is given by
$$L(\theta, \sigma^2 \mid \boldsymbol{x}) = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^n \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n(x_i - \theta)^2\right).$$
The supremum in the denominator of $\lambda(\boldsymbol{x})$ is attained at the unrestricted MLE of $(\theta, \sigma^2)$, that is, $\hat{\theta} = \bar{x}$ and $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n(x_i - \bar{x})^2$ (See Section 4.1 if this is unclear).
The supremum in the numerator of $\lambda(\boldsymbol{x})$ requires a bit more care, since it is "restricted" to the parameter space $\Theta_0$. Specifically, if we observe $\bar{x} > \theta_0$ ("$>$" is more precise here than the "$\geq$" in the solution), then $L(\theta, \sigma^2)$ cannot be maximized at the unrestricted MLE ($\hat{\theta} = \bar{x}, \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n(x_i - \bar{x})^2$), since $(\bar{x}, \hat{\sigma}^2) \notin \Theta_0$. In this case, the restricted MLE (with respect to $\Theta_0$) of $(\theta, \sigma^2)$ is attained at the boundary of the parameter space, that is, $\hat{\theta}_0 = \theta_0$ and $\hat{\sigma}_0^2 = \frac{1}{n}\sum_{i=1}^n(x_i - \theta_0)^2$. On the other hand, if we observe $\bar{x} \leq \theta_0$, then $L(\theta, \sigma^2)$ can be maximized at the unrestricted MLE ($\hat{\theta} = \bar{x}, \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n(x_i - \bar{x})^2$), since $(\bar{x}, \hat{\sigma}^2) \in \Theta_0$. In this case, the numerator and denominator coincide, and thus $\lambda(\boldsymbol{x}) = 1$.
To sum up, for $n(\theta, \sigma^2)$ where $\sigma^2$ is unknown, the LRT statistic of testing $H_0: \theta \leq \theta_0$ versus $H_1: \theta > \theta_0$ is given by
$$\lambda(\boldsymbol{x}) = \begin{cases} \dfrac{\text{sup}_{\Theta_0}L(\theta, \sigma^2 \mid \boldsymbol{x})}{\text{sup}_{\Theta}L(\theta, \sigma^2 \mid \boldsymbol{x})} = \dfrac{L(\theta_0, \hat{\sigma}_0^2 \mid \boldsymbol{x})}{L(\bar{x}, \hat{\sigma}^2 \mid \boldsymbol{x})} = \left(\dfrac{\hat{\sigma}^2}{\hat{\sigma}_0^2}\right)^{n/2} & \text{if } \bar{x} > \theta_0; \\ 1 & \text{if } \bar{x} \leq \theta_0. \end{cases}$$
Remark: It is worth noting that
$$\exp\left(-\frac{1}{2\hat{\sigma}_0^2}\sum_{i=1}^n(x_i - \theta_0)^2\right) = \exp\left(-\frac{1}{2\hat{\sigma}^2}\sum_{i=1}^n(x_i - \bar{x})^2\right) = \exp\left(-\frac{n}{2}\right),$$
which cancels in the derivation of the LRT statistic $\lambda(\boldsymbol{x})$ for $\bar{x} > \theta_0$.
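The case formula can be turned into a small computation directly. Below is a sketch (the data values are hypothetical, chosen so that each branch of the case formula fires once); the restricted and unrestricted variance MLEs are exactly the $\hat{\sigma}_0^2$ and $\hat{\sigma}^2$ defined above.

```python
import numpy as np

def lrt_statistic(x, theta0):
    """One-sided LRT statistic lambda(x) from the case formula above."""
    n = len(x)
    xbar = x.mean()
    if xbar <= theta0:
        return 1.0  # restricted and unrestricted MLEs coincide
    sigma_hat2 = np.mean((x - xbar) ** 2)     # unrestricted MLE of sigma^2
    sigma0_hat2 = np.mean((x - theta0) ** 2)  # restricted MLE, theta fixed at theta0
    return (sigma_hat2 / sigma0_hat2) ** (n / 2)

x = np.array([1.2, 0.4, 2.1, 0.9, 1.5])  # hypothetical sample, xbar = 1.22
print(lrt_statistic(x, theta0=0.0))  # xbar > theta0, so lambda < 1
print(lrt_statistic(x, theta0=2.0))  # xbar <= theta0, so lambda = 1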
Best Answer
The critical region for testing $H_0: \mu = 0$ against $H_1: \mu \neq 0$ (with $\sigma^2$ known) is simply of the form $|\overline X|>k$, where $k\,(>0)$ is a constant such that $$P_{H_0}(|\overline X|>k)=\alpha\,.$$
Noting that $\frac{\sqrt n(\overline X-\mu)}{\sigma}\sim N(0,1)$, you have
$$P_{H_0}\left(|\overline X|>k\right)=P_{H_0}\left(\left|\frac{\sqrt n\overline X}{\sigma}\right|>\frac{\sqrt nk}{\sigma}\right)=P\left(|Z|>\frac{\sqrt nk}{\sigma}\right)\,,\quad \text{ where }Z\sim N(0,1)\,.$$
Now $P\left(|Z|>\frac{\sqrt nk}{\sigma}\right)=\alpha$ means $\sqrt n k/\sigma=z_{\alpha/2}$, the upper $\alpha/2$ quantile of the standard normal distribution. Therefore $k=\frac{\sigma z_{\alpha/2}}{\sqrt n}$ and the rejection region is written as $$\left|\frac{\sqrt n\overline X}{\sigma}\right|>z_{\alpha/2}$$
For $\alpha=0.05$, we have $z_{\alpha/2}\approx 1.96$.
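The cutoff $k=\sigma z_{\alpha/2}/\sqrt n$ is straightforward to compute; the sketch below (with assumed values of $n$ and $\sigma$) also checks that the resulting test has size $\alpha$ under $H_0$:

```python
import numpy as np
from scipy import stats

# Assumed values of n and sigma for illustration
alpha, n, sigma = 0.05, 25, 2.0

z = stats.norm.ppf(1 - alpha / 2)  # upper alpha/2 quantile, approx 1.96
k = sigma * z / np.sqrt(n)         # cutoff for |Xbar|

# Under H0, Xbar ~ N(0, sigma^2/n), so P(|Xbar| > k) should equal alpha
size = 2 * stats.norm.sf(k, loc=0, scale=sigma / np.sqrt(n))
print(round(z, 2), round(size, 4))
```

The `sf` call is the survival function $P(\overline X > k)$ under $H_0$; doubling it accounts for both tails.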
But I think the power function by definition should be
\begin{align} \beta(\mu)&=P_{\mu}\left(\left|\frac{\sqrt n\overline X}{\sigma}\right|>z_{\alpha/2}\right) \\&=1-P_{\mu}\left(-z_{\alpha/2}\le \frac{\sqrt n\overline X}{\sigma}\le z_{\alpha/2}\right) \\&=1-P_{\mu}\left(-z_{\alpha/2}-\frac{\sqrt n\mu}{\sigma}\le \frac{\sqrt n(\overline X-\mu)}{\sigma} \le z_{\alpha/2}-\frac{\sqrt n\mu}{\sigma}\right) \\&=1-\left[\Phi\left(z_{\alpha/2}-\frac{\sqrt n\mu}{\sigma}\right)-\Phi\left(-z_{\alpha/2}-\frac{\sqrt n\mu}{\sigma}\right)\right] \\&=1-\Phi\left(\frac{\sqrt n\mu}{\sigma}+z_{\alpha/2}\right)+\Phi\left(\frac{\sqrt n\mu}{\sigma}-z_{\alpha/2}\right) \end{align}
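The power function above is easy to evaluate with the standard normal CDF. A sketch (the values of $n$ and $\sigma$ are assumptions) that also sanity-checks two properties of the formula: $\beta(0)=\alpha$, and $\beta$ is symmetric and increasing in $|\mu|$:

```python
import numpy as np
from scipy import stats

def power(mu, n=25, sigma=2.0, alpha=0.05):
    """beta(mu) = 1 - Phi(sqrt(n) mu / sigma + z_{alpha/2})
                    + Phi(sqrt(n) mu / sigma - z_{alpha/2})"""
    z = stats.norm.ppf(1 - alpha / 2)
    a = np.sqrt(n) * mu / sigma
    return 1 - stats.norm.cdf(a + z) + stats.norm.cdf(a - z)

print(power(0.0))  # at mu = 0 the power equals the size alpha = 0.05
print(power(0.5), power(1.0))
```

At $\mu=0$ the expression reduces to $1-\Phi(z_{\alpha/2})+\Phi(-z_{\alpha/2})=\alpha$, confirming the test has the stated size.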