The power is $1-\beta$, where $\beta$ is the probability of a Type II error, that is, the probability of failing to reject the null hypothesis when the alternative is true.
For a normal population with known variance, the power function would be
$\pi(\mu) = 1 - P(-z_{\alpha/2}-\dfrac{\mu-\mu_0}{\sigma/\sqrt{n}}<Z<z_{\alpha/2}-\dfrac{\mu-\mu_0}{\sigma/\sqrt{n}})$,
where $Z\sim N(0,1)$, $n$ is the sample size, $\sigma$ is the standard deviation (the square root of the variance), and $z_{\alpha/2}=\Phi^{-1}(1-\alpha/2)$ is the inverse of the standard normal cdf evaluated at $1-\alpha/2$. In your case the null hypothesis is $\mu_0=0$.
The probability is calculated by means of the standard normal cdf $\Phi(\cdot)$:
$\pi(\mu) = 1 - \Phi(z_{\alpha/2}-\dfrac{\mu-\mu_0}{\sigma/\sqrt{n}}) + \Phi(-z_{\alpha/2}-\dfrac{\mu-\mu_0}{\sigma/\sqrt{n}})$.
If $\alpha=0.05$, that is, a significance level of 5% (confidence level of 95%), you have $z_{\alpha/2}=1.9599639845\approx 1.96$.
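The power function above can be evaluated directly. A minimal sketch (the function name `power` and the chosen values of $n$ and $\sigma$ are illustrative assumptions, not from the original):

```python
# Hypothetical sketch: power of the two-sided z-test for H0: mu = mu0
# at significance level alpha, assuming sigma is known.
from scipy.stats import norm

def power(mu, mu0=0.0, sigma=1.0, n=30, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)            # z_{alpha/2} ~= 1.96 for alpha = 0.05
    shift = (mu - mu0) / (sigma / n**0.5)  # (mu - mu0) / (sigma / sqrt(n))
    # pi(mu) = 1 - Phi(z_{alpha/2} - shift) + Phi(-z_{alpha/2} - shift)
    return 1 - norm.cdf(z - shift) + norm.cdf(-z - shift)

print(round(power(0.0), 4))   # at mu = mu0 the power reduces to alpha = 0.05
```

Note that at $\mu=\mu_0$ the power equals $\alpha$, the probability of a Type I error, which is a useful sanity check on the formula.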
According to Casella & Berger's notation (Definition 8.2.1), the likelihood ratio test statistic for testing $H_0: \theta \in \Theta_0$ versus $H_1: \theta \in \Theta_0^c$ is
$$\lambda(\boldsymbol{x}) = \frac{\text{sup}_{\Theta_0}L(\theta \mid \boldsymbol{x})}{\text{sup}_{\Theta}L(\theta \mid \boldsymbol{x})}.$$
A likelihood ratio test (LRT) is any test that has a rejection region of the form $\{\boldsymbol{x}: \lambda(\boldsymbol{x}) \leq c\}$, where $c$ is any number satisfying $0 \leq c \leq 1$.
In the problem of testing $H_0: \theta \leq \theta_0$ versus $H_1: \theta > \theta_0$, where $X_1, \dots, X_n \overset{i.i.d.}{\sim} n(\theta, \sigma^2)$ and $\sigma^2$ is unknown, we have $\Theta_0 = \{(\theta, \sigma^2): \theta \leq \theta_0, \sigma^2 > 0\}$ and $\Theta = \{(\theta, \sigma^2): \theta \in \mathbb{R}, \sigma^2 > 0\}$, and the likelihood is given by
$$L(\theta, \sigma^2 \mid \boldsymbol{x}) = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^n \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n(x_i - \theta)^2\right).$$
The supremum in the denominator of $\lambda(\boldsymbol{x})$ is attained at the unrestricted MLE of $(\theta, \sigma^2)$, that is, $\hat{\theta} = \bar{x}$ and $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n(x_i - \bar{x})^2$ (See Section 4.1 if this is unclear).
The supremum in the numerator of $\lambda(\boldsymbol{x})$ requires more care, since the maximization is restricted to the parameter space $\Theta_0$. Specifically, if we observe $\bar{x} > \theta_0$ ("$>$" is more precise than the "$\geq$" in the solution), then $L(\theta, \sigma^2)$ cannot be maximized at the unrestricted MLE $(\hat{\theta} = \bar{x}, \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n(x_i - \bar{x})^2)$, since $(\bar{x}, \hat{\sigma}^2) \notin \Theta_0$. In this case, the restricted MLE (with respect to $\Theta_0$) of $(\theta, \sigma^2)$ is attained at the boundary of the parameter space, that is, $\hat{\theta}_0 = \theta_0$ and $\hat{\sigma}_0^2 = \frac{1}{n}\sum_{i=1}^n(x_i - \theta_0)^2$. On the other hand, if we observe $\bar{x} \leq \theta_0$, then $L(\theta, \sigma^2)$ is maximized at the unrestricted MLE $(\hat{\theta} = \bar{x}, \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n(x_i - \bar{x})^2)$, since $(\bar{x}, \hat{\sigma}^2) \in \Theta_0$. In this case, the numerator and denominator coincide, and thus $\lambda(\boldsymbol{x}) = 1$.
To sum up, for $n(\theta, \sigma^2)$ with $\sigma^2$ unknown, the LRT statistic for testing $H_0: \theta \leq \theta_0$ versus $H_1: \theta > \theta_0$ is given by
$$\lambda(\boldsymbol{x}) = \begin{cases} \dfrac{\text{sup}_{\Theta_0}L(\theta, \sigma^2 \mid \boldsymbol{x})}{\text{sup}_{\Theta}L(\theta, \sigma^2 \mid \boldsymbol{x})} = \dfrac{L(\theta_0, \hat{\sigma}_0^2 \mid \boldsymbol{x})}{L(\bar{x}, \hat{\sigma}^2 \mid \boldsymbol{x})} = \left(\dfrac{\hat{\sigma}^2}{\hat{\sigma}_0^2}\right)^{n/2} & \text{if } \bar{x} > \theta_0; \\ 1 & \text{if } \bar{x} \leq \theta_0. \end{cases}$$
Remark: It is worth noting that
$$\exp\left(-\frac{1}{2\hat{\sigma}_0^2}\sum_{i=1}^n(x_i - \theta_0)^2\right) = \exp\left(-\frac{1}{2\hat{\sigma}^2}\sum_{i=1}^n(x_i - \bar{x})^2\right) = \exp\left(-\frac{n}{2}\right),$$
which cancels in the derivation of the LRT statistic $\lambda(\boldsymbol{x})$ for $\bar{x} > \theta_0$.
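The closed form above is easy to compute on data. A minimal sketch (the function name `lrt_statistic` and the simulated sample are illustrative assumptions):

```python
# Hypothetical sketch: the LRT statistic lambda(x) for H0: theta <= theta0
# with unknown sigma^2, using the closed form derived above.
import numpy as np

def lrt_statistic(x, theta0):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    if xbar <= theta0:
        return 1.0                        # restricted and unrestricted MLEs coincide
    s2_hat = np.mean((x - xbar) ** 2)     # unrestricted MLE of sigma^2
    s2_0 = np.mean((x - theta0) ** 2)     # restricted MLE with theta = theta0 on the boundary
    return (s2_hat / s2_0) ** (n / 2)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=20)   # data violating H0: theta <= 0
lam = lrt_statistic(x, theta0=0.0)
print(0.0 < lam <= 1.0)   # lambda always lies in (0, 1]
```

Since $\bar{x}$ minimizes $\sum_i (x_i - c)^2$ over $c$, we always have $\hat{\sigma}^2 \leq \hat{\sigma}_0^2$, so $\lambda(\boldsymbol{x}) \leq 1$ automatically, and small values of $\lambda$ are evidence against $H_0$.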
Best Answer
The result does not depend on whether $\sigma$ is known or not, but in practice, how could we even hope to calculate $\frac{\bar X-\theta}{\sigma / \sqrt{n}}$ when $\sigma$ is unknown to us? A proper statistic should depend only on the data, not on unknown parameters.
When $\sigma$ is unknown, the usual approach is to replace $\sigma$ by the sample standard deviation $S= \sqrt{\frac{1}{n-1}\sum_{k=1}^n (X_k - \bar{X})^2}$ and form the new statistic $T=\frac{\bar X-\theta}{S/\sqrt{n}}$. This statistic does not have a normal distribution; instead, it has a $t$-distribution with $n-1$ degrees of freedom.
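The substitution of $S$ for $\sigma$ can be checked against a library implementation. A minimal sketch (the simulated sample and hypothesized mean of 0 are illustrative assumptions):

```python
# Hypothetical sketch: the t-statistic T = (xbar - theta0) / (S / sqrt(n)),
# which has a t-distribution with n - 1 degrees of freedom under H0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=25)
n = len(x)
theta0 = 0.0

s = x.std(ddof=1)                                  # S, with the n - 1 divisor
t_stat = (x.mean() - theta0) / (s / np.sqrt(n))
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)    # two-sided p-value from t_{n-1}

# scipy's one-sample t-test should agree with the hand computation
t_ref, p_ref = stats.ttest_1samp(x, popmean=theta0)
print(np.isclose(t_stat, t_ref) and np.isclose(p_value, p_ref))
```

Using $n-1$ rather than $n$ in the divisor of $S$ is what makes $S^2$ unbiased for $\sigma^2$ and gives $T$ exactly the $t_{n-1}$ distribution.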