Suppose $L(\sigma^2\mid x_1,\ldots,x_n)$ is the likelihood function given the sample $(x_1,\ldots,x_n)$, drawn i.i.d. from $N(\mu,\sigma^2)$ with $\mu$ known.
The unrestricted MLE of $\sigma^2$ is $$\hat\sigma^2=\frac1n \sum\limits_{i=1}^n(x_i-\mu)^2$$
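For reference, under this model the likelihood is
$$L(\sigma^2\mid x_1,\ldots,x_n)=(2\pi\sigma^2)^{-n/2}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n(x_i-\mu)^2\right),$$
and setting
$$\frac{\partial}{\partial\sigma^2}\log L(\sigma^2\mid x_1,\ldots,x_n)=-\frac{n}{2\sigma^2}+\frac{1}{2\sigma^4}\sum_{i=1}^n(x_i-\mu)^2=0$$
recovers $\hat\sigma^2$ as the unique stationary point.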
The likelihood ratio statistic for testing $H_0:\sigma^2=\sigma_0^2$ against $H_1:\sigma^2>\sigma_0^2$ is defined as
\begin{align}
\Lambda(x_1,\ldots,x_n)&=\frac{\sup_{\sigma^2=\sigma_0^2}L(\sigma^2\mid x_1,\ldots,x_n)}{\sup_{\sigma^2\ge\sigma_0^2}L(\sigma^2\mid x_1,\ldots,x_n)}
\\&=\frac{L(\sigma_0^2\mid x_1,\ldots,x_n)}{L(\tilde\sigma^2\mid x_1,\ldots,x_n)}\,,
\end{align}
where $\tilde\sigma^2$ is the MLE of $\sigma^2$ under the restriction $\sigma^2\ge\sigma_0^2$.
Because $L(\sigma^2\mid x_1,\ldots,x_n)$, viewed as a function of $\sigma^2$, increases up to $\hat\sigma^2$ and decreases afterwards (see the sketch below), the restricted MLE is
$$\tilde\sigma^2=\begin{cases}\sigma_0^2, &\text{if }\hat\sigma^2\le \sigma_0^2, \\ \hat\sigma^2, &\text{if }\hat\sigma^2> \sigma_0^2.\end{cases}$$
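To sketch why: writing $\sum_{i=1}^n(x_i-\mu)^2=n\hat\sigma^2$, the derivative of the log-likelihood with respect to $\sigma^2$ is
$$\frac{\partial}{\partial\sigma^2}\log L(\sigma^2\mid x_1,\ldots,x_n)=-\frac{n}{2\sigma^2}+\frac{n\hat\sigma^2}{2\sigma^4}=\frac{n}{2\sigma^4}\left(\hat\sigma^2-\sigma^2\right),$$
which is positive for $\sigma^2<\hat\sigma^2$ and negative for $\sigma^2>\hat\sigma^2$. So on the restricted set $[\sigma_0^2,\infty)$ the likelihood is maximised at the boundary $\sigma_0^2$ when $\hat\sigma^2\le\sigma_0^2$, and at $\hat\sigma^2$ otherwise.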
Therefore,
$$\Lambda(x_1,\ldots,x_n)=\begin{cases}1, &\text{if }\hat\sigma^2\le \sigma_0^2, \\ \frac{L(\sigma_0^2\mid x_1,\ldots,x_n)}{L(\hat\sigma^2\mid x_1,\ldots,x_n)}, &\text{if }\hat\sigma^2> \sigma_0^2.\end{cases}$$
When $\Lambda=1$, we trivially fail to reject $H_0$. Otherwise, we reject $H_0$ for small values of $\Lambda$, so when $\hat\sigma^2>\sigma_0^2$ the critical region is of the form $\Lambda<c$ for some constant $c$.
Since $\Lambda$ is a strictly decreasing function of $\hat\sigma^2$ on this region (a short verification follows), you would find that $\Lambda<c \iff \sum\limits_{i=1}^n (x_i-\mu)^2>k$, where $k$ is chosen so that the test has size $\alpha$: $$P_{H_0}\left(\sum\limits_{i=1}^n(X_i-\mu)^2>k\right)=\alpha$$
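To verify the monotonicity, substitute the explicit likelihood: for $\hat\sigma^2>\sigma_0^2$,
$$\Lambda=\left(\frac{\hat\sigma^2}{\sigma_0^2}\right)^{n/2}\exp\left(-\frac{n}{2}\left(\frac{\hat\sigma^2}{\sigma_0^2}-1\right)\right)=\left(t\,e^{1-t}\right)^{n/2},\qquad t=\frac{\hat\sigma^2}{\sigma_0^2}>1,$$
and since $\frac{d}{dt}\left(t\,e^{1-t}\right)=(1-t)\,e^{1-t}<0$ for $t>1$, $\Lambda$ is decreasing in $\hat\sigma^2$, equivalently in $\sum_{i=1}^n(x_i-\mu)^2=n\hat\sigma^2$.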
Since $\frac{1}{\sigma_0^2}\sum\limits_{i=1}^n(X_i-\mu)^2\sim \chi^2_n$ under $H_0$, we must have $k=\sigma_0^2\cdot\chi^2_{\alpha,n}$, where $\chi^2_{\alpha,n}$ is the $(1-\alpha)$ quantile (upper $\alpha$-point) of a $\chi^2_n$ distribution. So the likelihood ratio test rejects $H_0$ when $\sum\limits_{i=1}^n (X_i-\mu)^2>\sigma_0^2\cdot\chi^2_{\alpha,n}$.
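As a quick check, this test indeed has size $\alpha$:
$$P_{H_0}\left(\sum_{i=1}^n(X_i-\mu)^2>\sigma_0^2\cdot\chi^2_{\alpha,n}\right)=P\left(\chi^2_n>\chi^2_{\alpha,n}\right)=\alpha.$$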
This is in fact the same test you would get by searching for a most powerful test using the Neyman-Pearson lemma. But if you are asked to derive a likelihood ratio test, you should stick to the LRT construction itself.
Regarding the numerical part: the p-value you calculated is correct, and the level $\alpha=5\%$ is given so that you can compare the p-value against it. Since the p-value is less than $\alpha$, you reject $H_0$.
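If it helps, here is a minimal numerical sketch of the test in Python. The values of $\mu$, $\sigma_0^2$, $\alpha$ and the sample are hypothetical placeholders (not the numbers from your question); it computes both the critical value $\sigma_0^2\cdot\chi^2_{\alpha,n}$ and the p-value.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical setup (placeholders, not the numbers from the original question)
mu = 5.0          # known mean
sigma0_sq = 2.0   # hypothesised variance under H0
alpha = 0.05      # significance level

rng = np.random.default_rng(0)
x = rng.normal(loc=mu, scale=1.8, size=12)   # simulated sample standing in for the data
n = len(x)

stat = np.sum((x - mu) ** 2)                  # sum_i (x_i - mu)^2
crit = sigma0_sq * chi2.ppf(1 - alpha, df=n)  # sigma0^2 * chi^2_{alpha, n}
p_value = chi2.sf(stat / sigma0_sq, df=n)     # P(chi^2_n > stat / sigma0^2) under H0

print(f"statistic = {stat:.3f}, critical value = {crit:.3f}, p-value = {p_value:.4f}")
print("reject H0" if stat > crit else "fail to reject H0")  # same decision as p_value < alpha
```

Comparing `stat` with `crit` and comparing `p_value` with `alpha` always give the same decision, which is the point of the last remark above.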