Let us rewrite $x_t, x_{t-1}, \dots, x_{t-K+1}$ in terms of $x_{t-K}$
$$x_t=c\left(1+\varphi+\dots+\varphi^{K-1}\right)+\varepsilon_t+\varphi\varepsilon_{t-1}+\dots+\varphi^{K-1}\varepsilon_{t-K+1}+\varphi^Kx_{t-K}$$
$$x_{t-1}=c\left(1+\varphi+\dots+\varphi^{K-2}\right)+\varepsilon_{t-1}+\varphi\varepsilon_{t-2}+\dots+\varphi^{K-2}\varepsilon_{t-K+1}+\varphi^{K-1}x_{t-K}$$
$$\dots$$
$$x_{t-K+1}=c+\varphi x_{t-K}+\varepsilon_{t-K+1}$$
We denote the $K$-period moving average of $x_t$ by $\tilde{x}_{t}^K$ and, averaging the $K$ expressions above, obtain
$$\tilde{x}_t^K=\frac{1}{K}\left[\sum_{i=0}^{K-1}\left(c(K-i)\varphi^i+\varepsilon_{t-i}\sum_{j=0}^i\varphi^j\right)+x_{t-K}\sum_{i=1}^K\varphi^i\right]$$
Assuming the process is stationary and using that $x_{t-K}$ is independent of $\varepsilon_{t-K+1},\dots,\varepsilon_t$,
$$\begin{align}
\operatorname{\mathbb{V}ar}\left(\tilde{x}^K_t\right) &= \frac{1}{K^2}\left[\operatorname{\mathbb{V}ar}\left(\sum_{i=0}^{K-1}\varepsilon_{t-i}\sum_{j=0}^i\varphi^j\right)+\operatorname{\mathbb{V}ar}\left(x_{t-K}\sum_{i=1}^K\varphi^i\right)\right]\\
&= \frac{1}{K^2}\left[\sigma^2_{\varepsilon}\sum^{K-1}_{i=0}\left(\frac{\varphi^{i+1}-1}{\varphi-1}\right)^2+\frac{\sigma^2_{\varepsilon}}{1-\varphi^2}\left(\frac{\varphi^{K+1}-\varphi}{\varphi-1}\right)^2\right]\\
&=\frac{\sigma^2_{\varepsilon}}{K^2}\left[\frac{1}{(\varphi-1)^2}\sum^{K-1}_{i=0}\left(\varphi^{i+1}-1\right)^2+\frac{\varphi^2(\varphi^K-1)^2}{(1-\varphi)^2(1-\varphi^2)}\right]
\end{align}$$
and some more algebra leads to
$$\operatorname{\mathbb{V}ar}\left(\tilde{x}^K_t\right)=\frac{\sigma^2_{\varepsilon}\left(\varphi^2(K\varphi-K+2)-2\varphi^{1+K}(\varphi-1)+K-\varphi(K+2)\right)}{(1+\varphi)(1-\varphi)^4K^2}$$
A quick numerical check in R:

```r
library(zoo)  # for rollmean

sigma <- 2.5  # sd of the innovations
phi   <- 0.6  # AR(1) coefficient
K     <- 3    # moving-average window
const <- 2    # the constant c

set.seed(321)
eps <- rnorm(1e5, sd = sigma)
# simulate x_t = c + phi * x_{t-1} + eps_t recursively, starting from 0
x <- filter(c(0, const + eps), filter = phi, method = "recursive")

# closed-form variance of the K-period moving average derived above
MAvar <- function(phi, sigma, K)
  sigma^2 / (K^2 * (phi + 1) * (1 - phi)^4) *
    (phi^2 * (K * phi - K + 2) - 2 * phi^(1 + K) * (phi - 1) + K - phi * (K + 2))

ma <- rollmean(x, K)
var(ma)
# [1] 6.67111
MAvar(phi, sigma, K)
# [1] 6.640625
```
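As a further sanity check (my own extension of the snippet above, using base R only and some arbitrary window sizes), the closed form tracks the empirical moving-average variance across several values of $K$:

```r
# Compare the closed-form MAvar against the empirical variance of the
# K-period moving average for several window sizes (base R only).
sigma <- 2.5; phi <- 0.6; const <- 2
set.seed(321)
eps <- rnorm(1e5, sd = sigma)
x <- stats::filter(c(0, const + eps), filter = phi, method = "recursive")

MAvar <- function(phi, sigma, K)
  sigma^2 / (K^2 * (phi + 1) * (1 - phi)^4) *
    (phi^2 * (K * phi - K + 2) - 2 * phi^(1 + K) * (phi - 1) + K - phi * (K + 2))

for (K in c(2, 3, 5, 10)) {
  ma <- stats::filter(x, rep(1 / K, K))  # moving average via convolution filter
  cat(sprintf("K = %2d  empirical = %.3f  theoretical = %.3f\n",
              K, var(ma, na.rm = TRUE), MAvar(phi, sigma, K)))
}
```

The empirical and theoretical values agree to within sampling error for every window size, which is reassuring given how much algebra went into the closed form.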
whuber mentioned in the comments that I just need to show that $\log(g_t)$ has a normal distribution.
Since this is a standard AR(1) process, the innovations $\epsilon^g_t \sim N(0, \sigma^2_g)$ are i.i.d. I used a simple example for this, without the messy constant term:
$$
\log(g_t) = \rho_g \log(g_{t-1}) + \epsilon^g_t
$$
That means that (working forward from $g_0$):
\begin{align}
\log(g_1) &= \rho_g \log(g_0) + \epsilon^g_1
\end{align}
and then
\begin{align}
\log(g_2) &= \rho_g \log(g_1) + \epsilon^g_2 \\
&= \rho_g \left(\rho_g \log(g_0) + \epsilon^g_1 \right) + \epsilon^g_2 \\
&= \rho_g^2 \log(g_0) + \rho_g \epsilon^g_1 + \epsilon^g_2
\end{align}
etc. so
\begin{align}
\log(g_t)
&= \rho_g^t \log(g_0) + \rho_g^{t-1} \epsilon^g_1 + \rho_g^{t-2} \epsilon^g_2 + \dots + \epsilon^g_t
\end{align}
so $\log(g_t)$ is just a linear combination of independent, normally distributed random variables (the $\epsilon^g_t$), plus the deterministic term $\rho_g^t \log(g_0)$, so it has a normal distribution.
Once you know that $\log(g_t)$ has a normal distribution, you know that $g_t$ has a log-normal distribution, so you can look up the properties of the distribution and get the mean and variance that way.
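To make this concrete, here is a small simulation (the parameter values $\rho_g$ and $\sigma_g$ are arbitrary choices of mine). With no constant term, the stationary distribution of $\log(g_t)$ is $N\!\left(0, \sigma_g^2/(1-\rho_g^2)\right)$, so the usual log-normal mean formula $\exp(\mu + s^2/2)$ applies with $\mu = 0$:

```r
# Simulate log(g_t) = rho_g * log(g_{t-1}) + eps_t and check that g_t
# behaves like a log-normal variable (illustrative parameter values).
rho_g   <- 0.5
sigma_g <- 0.2
set.seed(42)
eps_g <- rnorm(2e5, sd = sigma_g)
log_g <- stats::filter(eps_g, filter = rho_g, method = "recursive")

s2 <- sigma_g^2 / (1 - rho_g^2)  # stationary variance of log(g_t)
var(log_g)                       # should be close to s2
mean(exp(log_g))                 # should be close to the log-normal mean
exp(s2 / 2)                      # exp(mu + s2/2) with mu = 0
```

The simulated mean of $g_t = \exp(\log g_t)$ matches the log-normal formula, as the argument above predicts.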
Best Answer
$$\text{Var}(y_t)=\text{Var}(\phi y_{t-1}) + \text{Var}(\varepsilon_{t}),$$
which holds because $\varepsilon_t$ is independent of $y_{t-1}$. As we know, $E(\varepsilon_{t}^2)=\sigma^2$, so we have:
$$\text{Var}(y_t)=\text{Var}(\phi y_{t-1}) + \sigma^2.$$
Using the properties of the variance, we take $\phi$ out of the variance as $\phi^2$:
$$\text{Var}(y_t)=\phi^2\text{Var}(y_{t-1}) + \sigma^2.$$
Since the process is stationary, $\text{Var}(y_t)=\text{Var}(y_{t-1})$, and solving gives:
$$\text{Var}(y)=\frac{\sigma^2}{1-\phi^2}.$$
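This stationary variance is easy to verify by simulation (the parameter values below are arbitrary illustrations):

```r
# Check Var(y) = sigma^2 / (1 - phi^2) by simulating a long AR(1) path
# (illustrative parameter values).
phi   <- 0.7
sigma <- 1.5
set.seed(1)
y <- stats::filter(rnorm(2e5, sd = sigma), filter = phi, method = "recursive")

var(y)                   # empirical variance of the simulated path
sigma^2 / (1 - phi^2)    # theoretical value: 2.25 / 0.51, about 4.41
```

The sample variance of the simulated path lands on the theoretical value up to Monte Carlo error.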