To carry out the likelihood ratio test, you first compute the maximum of the likelihood under your full assumed model. That is, treat the triple $(\alpha, \beta, \sigma^{2})$ as unknown, and use analytic or numerical methods to compute the maximum likelihood estimates of these parameters from your data by maximizing the expression you provided for $L(\alpha,\beta,\sigma^{2})$.
For convenience, let $\hat{\theta}_{F} = (\hat{\alpha}_{F}, \hat{\beta}_{F}, \hat{\sigma}_{F}^{2})$, where 'F' stands for 'Full', since you are estimating the full set of parameters when computing the MLE.
Next, assume that your null hypothesis is correct, so that $\beta=0$. Then let $\hat{\theta}_{R} = (\hat{\alpha}_{R}, 0, \hat{\sigma}_{R}^{2})$, where we plug in the null value of $\beta$ and compute the MLE of the remaining parameters under that fixed assumption. The 'R' here stands for 'Restricted', since we maximize the likelihood under the extra restriction on $\beta$.
Then, writing $\ln L$ for the log-likelihood, the likelihood ratio test statistic is given by
$$ LR = 2\cdot{}\biggl( \ln L(\hat{\theta}_{F}) - \ln L(\hat{\theta}_{R})\biggr).$$
Assuming the null hypothesis is true, then for large sample sizes $N$, $LR$ has approximately a $\chi^{2}$ distribution with degrees of freedom $K_{0}$, the number of parameters restricted by the null hypothesis. In this case you are restricting only a single scalar, $\beta$, so $K_{0} = 1$; if your null hypothesis restricted multiple parameters, the degrees of freedom would change accordingly.
The idea behind this test is that if the null hypothesis is true, then the value of the likelihood function shouldn't be much different when you find the unrestricted MLE vs. when you find the MLE with the null-hypothesis-restriction applied.
The large-sample distribution is derived by noting that both the (negative) outer product of the score (the vector of likelihood derivatives) and the negative Hessian of the log-likelihood converge to the Fisher information matrix. One then takes a Taylor expansion of the log-likelihood around the true parameter, truncates it at the quadratic term, and compares its value at the restricted versus the unrestricted estimate; the resulting difference is asymptotically distributed as $\chi^{2}(K_{0})$.
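As a concrete numerical sketch of the procedure above (my own illustration: the simulated data, the seed, and the helper `max_loglik` are all made up for the example), here is the LR test of $\beta = 0$ in the simple linear model $y_i = \alpha + \beta x_i + \varepsilon_i$ with Gaussian errors:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data; the true beta is nonzero, so the test should tend to reject.
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=2.0, size=n)  # alpha = 1, beta = 0.5

def max_loglik(residuals):
    """Maximized Gaussian log-likelihood, using sigma^2_hat = RSS / n."""
    m = residuals.size
    sigma2_hat = np.mean(residuals ** 2)  # MLE of sigma^2
    return -0.5 * m * (np.log(2 * np.pi * sigma2_hat) + 1)

# Full model: the Gaussian MLEs of (alpha, beta) are the least-squares fit.
beta_hat, alpha_hat = np.polyfit(x, y, 1)
ll_full = max_loglik(y - alpha_hat - beta_hat * x)

# Restricted model (beta = 0): only the intercept, so alpha_R = mean(y).
ll_restr = max_loglik(y - np.mean(y))

LR = 2 * (ll_full - ll_restr)
p_value = stats.chi2.sf(LR, df=1)  # K0 = 1 restricted parameter
print(f"LR = {LR:.3f}, p = {p_value:.4f}")
```

Because the restricted model is nested in the full one, the restricted maximized likelihood can never exceed the full one, so $LR \ge 0$ by construction.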
To construct the UMP test, we first construct the corresponding MP test of $H_0:\lambda=\lambda_0$ against a simple alternative $\lambda = \lambda_1$ with $\lambda_1 > \lambda_0$. By the Neyman–Pearson lemma, the MP test rejects for large values of the likelihood ratio:
\begin{align}
\frac{L(X_1,\ldots,X_n\mid\lambda_1)}{L(X_1,\ldots,X_n\mid\lambda_0)} = \frac{\prod_{i=1}^n \frac{e^{-\lambda_1} \lambda_1^{x_i}}{x_i!}}{\prod_{i=1}^n \frac{e^{-\lambda_0} \lambda_0^{x_i}}{x_i!}} &= \exp\{n(\lambda_0 - \lambda_1)\}\left(\frac{\lambda_1}{\lambda_0}\right)^{\sum_{i=1}^n x_i}\\
&= \exp\{n(\lambda_0 - \lambda_1)\}\left(\frac{\lambda_1}{\lambda_0}\right)^{n\bar{x}_n} > c\,.
\end{align}
Since $\lambda_1 > \lambda_0$, this ratio is an increasing function of $\bar{X}_n$, so the MP test takes the form $\Psi(\mathrm{X}) = \mathcal{I}\left( \bar{X}_n > c' \right)$; because the rejection region does not depend on the particular $\lambda_1$, the same test is UMP. The cutoff depends on the distribution of $\bar{X}_n$, hence for large enough $n$ we can use the CLT to approximate the rejection region.
2.
\begin{align}
\alpha = \mathbb{E}_{\lambda_0}\Psi(\mathrm{X}) &= \mathbb{P}_{\lambda_0}\left( \bar{X}_n > c' \right)\\
&\approx 1-\Phi\left(\frac{c'-\lambda_0}{\sqrt{\lambda_0/n}} \right)\\
\implies c' &= \lambda_0 + Z_{1-\alpha}\sqrt{\lambda_0/n}.
\end{align}
For $\lambda_0 = 1$,
$$
c' = 1 + Z_{1-\alpha}\sqrt{1/n}\,.
$$
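To make this concrete (the sample size, level, and seed below are my own illustrative choices, not from the original), we can compute $c'$ and then check by simulation under $H_0$ that the rejection rate is close to $\alpha$; it will not match exactly, because $\bar{X}_n$ is discrete:

```python
import numpy as np
from scipy import stats

n, alpha, lam0 = 100, 0.05, 1.0

# Normal-approximation critical value: c' = lambda_0 + z_{1-alpha} * sqrt(lambda_0 / n)
z = stats.norm.ppf(1 - alpha)
c_prime = lam0 + z * np.sqrt(lam0 / n)
print(f"c' = {c_prime:.4f}")  # reject H0 when the sample mean exceeds c'

# Monte Carlo sanity check of the size under H0.
rng = np.random.default_rng(1)
xbar = rng.poisson(lam0, size=(20_000, n)).mean(axis=1)
size_hat = np.mean(xbar > c_prime)
print(f"simulated size ~ {size_hat:.3f}")
```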
- Power function over the parameter space $\Lambda = \mathbb{R}^+$.
\begin{align}
\pi(\lambda) &= \mathbb{E}_{\lambda}\Psi(\mathrm{X}) = \mathbb{P}_{\lambda}(\bar{X}_n>c')\\
&\approx 1-\Phi\left( \frac{c'-\lambda}{\sqrt{\lambda/n}} \right),& \forall \lambda \in \Lambda.
\end{align}
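Tabulating this approximate power function at a few values of $\lambda$ (again with my own illustrative choices of $n$, $\alpha$, and $\lambda_0$) shows it equals $\alpha$ at $\lambda = \lambda_0$ and increases with $\lambda$, as a one-sided test should:

```python
import numpy as np
from scipy import stats

n, alpha, lam0 = 100, 0.05, 1.0
c_prime = lam0 + stats.norm.ppf(1 - alpha) * np.sqrt(lam0 / n)

# CLT approximation of the power: pi(lambda) ~ 1 - Phi((c' - lambda) / sqrt(lambda / n))
lam = np.array([0.9, 1.0, 1.1, 1.3, 1.5])
power = 1 - stats.norm.cdf((c_prime - lam) / np.sqrt(lam / n))
for l, p in zip(lam, power):
    print(f"lambda = {l:.1f}: power ~ {p:.3f}")
```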
Best Answer
You have to simplify the likelihood ratio statistic and study the nature of the resulting function (often easier to consider the ratio as a function of a sufficient statistic) to find the cutoff point subject to a level/size restriction.
The likelihood function given the sample $(x_1,\ldots,x_n)\in\mathbb R^n$ is $$L(\sigma)=\frac{1}{(\sigma\sqrt{2\pi})^n}\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^nx_i^2\right]\quad,\,\sigma>0$$
Unrestricted MLE of $\sigma$ is $$\hat\sigma=\sqrt{\frac{T}{n}}\quad,\,T=\sum_{i=1}^n x_i^2$$
So the LR test statistic is
\begin{align} \Lambda(x_1,\ldots,x_n)&=\frac{L(\sigma_0)}{L(\hat\sigma)} \\\\&=\left(\frac{T}{n\sigma_0^2}\right)^{n/2}e^{-\frac{1}{2}\left(T/\sigma_0^2-n\right)} \end{align}
Therefore, for some constant $k$ (determined by $c$), $$\Lambda(x_1,\ldots,x_n)<c\iff\underbrace{ n\ln T-T/\sigma_0^2}_{g(T)}<k$$
You can verify that $g$ is concave, since $g''(T)=-n/T^2<0$.
If you sketch a rough plot of $g$ (increasing up to its maximum at $T=n\sigma_0^2$, then decreasing), you will see that for some $(c_1,c_2)$ with $c_1<c_2$,
$$g(T)<k\iff T<c_1\quad\text{ or }\quad T>c_2$$
Under $H_0$, $$T/\sigma_0^2 \sim \chi_n^2$$
And your $(c_1,c_2)$ is such that $$P_{H_0}(T<c_1)+P_{H_0}(T>c_2)=\alpha \quad\text{ and }\quad g(c_1)=g(c_2).$$
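There is no closed form for $(c_1, c_2)$, but they are easy to find numerically. A sketch of one approach (the values of $n$, $\alpha$, $\sigma_0^2$ and the tail-split parametrization via $a$ are my own choices, not from the original): split the level into $a$ in the lower tail and $\alpha - a$ in the upper tail, then solve for the $a$ at which $g$ takes equal values at the two chi-square cutoffs.

```python
import numpy as np
from scipy import stats, optimize

n, alpha, sigma0_sq = 20, 0.05, 1.0

def g(u):
    # g in units of u = T / sigma0^2; additive constants cancel in g(c1) = g(c2)
    return n * np.log(u) - u

def h(a):
    # Difference of g at the two cutoffs for a given lower-tail mass a.
    u1 = stats.chi2.ppf(a, n)          # lower cutoff: P(U < u1) = a
    u2 = stats.chi2.isf(alpha - a, n)  # upper cutoff: P(U > u2) = alpha - a
    return g(u1) - g(u2)

# h is increasing in a and changes sign on (0, alpha), so a root exists.
a = optimize.brentq(h, 1e-10, alpha - 1e-10)
c1 = sigma0_sq * stats.chi2.ppf(a, n)
c2 = sigma0_sq * stats.chi2.isf(alpha - a, n)
print(f"c1 = {c1:.3f}, c2 = {c2:.3f}")
```

The resulting cutoffs straddle $n\sigma_0^2$ (the maximizer of $g$), and the two tail probabilities sum to $\alpha$ by construction.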