The problem becomes simpler if one replaces the fixed time $t$ by a random time $T_\theta$ which is independent of the Markov process and Exponential with parameter $\theta$.
Going back to the distribution of $Y_t$ then requires inverting a Laplace transform, which is sometimes difficult, but turns out to be possible in the present situation.
Observe that for every bounded Borel function $h : \mathbb{R} \to \mathbb{R}$,
$$E[h(Y_{T_\theta})] = \int_0^\infty E[h(Y_t)] \theta e^{-\theta t} \mathrm{d}t.$$
I call $S_1 < S_2 < \ldots$ the jump times, and I set $S_0=0$, $D_1=S_1-S_0$, $D_2=S_2-S_1$,... and $N_t = \sup\{n \ge 0 : S_n \le t\}$.
The distribution of $N_t$ is Poisson($\lambda t$), and conditionally on $N_t=n$ and $X_0=x$, the density of $(S_1,\ldots,S_n)$ is
$$(s_1,\ldots,s_n) \mapsto \frac{n!}{t^n}1_{0<s_1<\ldots<s_n<t}.$$
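This conditional density is the order-statistics property of the Poisson process: given $N_t=n$, the jump times are distributed as $n$ ordered i.i.d. Uniform$(0,t)$ points, so in particular $E[S_1 \mid N_t = n] = t/(n+1)$. A quick Monte Carlo sketch (my own check, with arbitrary illustration values for $\lambda$, $t$ and $n$) confirms both the Poisson law of $N_t$ and this conditional mean:

```python
# Sketch check of the order-statistics property; lam, t, n are arbitrary
# illustration values, not taken from the problem.
import numpy as np

rng = np.random.default_rng(0)
lam, t, n, trials = 1.5, 2.0, 3, 200_000

# jump times = cumulative sums of i.i.d. Exp(lam) holding times
jumps = np.cumsum(rng.exponential(1 / lam, size=(trials, 20)), axis=1)
counts = (jumps <= t).sum(axis=1)      # N_t for each trial
mask = counts == n                     # the event {N_t = n}

p_hat = mask.mean()                    # estimates P[N_t = n] = (lam t)^n e^{-lam t} / n!
s1_hat = jumps[mask, 0].mean()         # estimates E[S_1 | N_t = n] = t / (n + 1)
print(p_hat, s1_hat)
```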
I omit the index $\theta$ and simply write $T$ in what follows.
Let $h : \mathbb{R}^{n+1} \to \mathbb{R}$ be any bounded Borel function and $Z = h(D_1,\ldots,D_n,T-S_n)$.
\begin{eqnarray*}
E\big[Z1_{N_T = n}\big]
&=& \int_0^\infty E\big[h(D_1,\ldots,D_n,t-S_n)1_{N_t = n}\big] \theta e^{-\theta t} \mathrm{d}t \\
&=& \int_0^\infty E\big[h(S_1,\ldots,S_n-S_{n-1},t-S_n)1_{N_t=n}\big] \theta e^{-\theta t} \mathrm{d}t \\
&=& \int_0^\infty \frac{(\lambda t)^n}{n!}e^{-\lambda t} \Big( \int_{\mathbb{R}^n} h(s_1,\ldots,s_n-s_{n-1},t-s_n) \frac{n!}{t^n} 1_{0<s_1<\ldots<s_n<t} ds_1 \ldots ds_n \Big) \theta e^{-\theta t}\mathrm{d}t \\
&=& \int_{\mathbb{R}^{n+1}} \theta \lambda^n e^{-(\lambda+\theta)t} h(s_1,\ldots,s_n-s_{n-1},t-s_n) 1_{0<s_1<\ldots<s_n<t} ds_1 \ldots ds_n dt \\
&=& \int_{\mathbb{R}^{n+1}} \theta \lambda^n e^{-(\lambda+\theta)(t_1+\cdots+t_n+r)} h(t_1,\ldots,t_n,r) 1_{t_1>0;\ldots;t_n>0;r>0} dt_1 \ldots dt_n dr
\end{eqnarray*}
Hence
$$P[N_T=n] = \frac{\theta \lambda^n}{(\lambda+\theta)^{n+1}}$$
and conditionally on $[N_T=n;X_0=x]$, the random variables $D_1,\ldots,D_n,T-S_n$ are independent with distribution Exponential($\lambda + \theta$).
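These two facts can be checked numerically. The sketch below (my own illustration, not part of the argument) draws $T \sim$ Exp($\theta$) independently of the first holding time $D_1 \sim$ Exp($\lambda$), and verifies that $P[N_T=0]=\theta/(\lambda+\theta)$ and that $D_1$ given $[N_T \ge 1]$ has mean $1/(\lambda+\theta)$:

```python
# Monte Carlo sketch; lam and theta are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(1)
lam, theta, trials = 1.0, 2.0, 200_000

T = rng.exponential(1 / theta, size=trials)   # independent Exp(theta) horizon
D1 = rng.exponential(1 / lam, size=trials)    # first holding time, S_1 = D_1

p0_hat = (D1 > T).mean()        # estimates P[N_T = 0] = theta / (lam + theta)
d1_hat = D1[D1 <= T].mean()     # estimates E[D_1 | N_T >= 1] = 1 / (lam + theta)
print(p0_hat, d1_hat)
```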
This enables us to compute the distribution of $Y_T$.
Conditionally on $[N_T=n;X_0=x]$, $Y_T$ is the sum of $n_x := \lfloor (n+1+x)/2 \rfloor$ independent random variables with distribution Exponential($\lambda + \theta$); that is, of $n_0 = \lfloor (n+1)/2 \rfloor$ such variables when $x=0$, and of $n_1 = \lfloor (n+2)/2 \rfloor$ when $x=1$. Note that $n_0$ and $n_1$ depend on $n$ and $n_0+n_1=n+1$.
The distribution of $Y_T$ given $X_0=x$ is
$$\sum_{n=0}^{+\infty}\frac{\theta\lambda^n}{(\lambda+\theta)^{n+1}} \Gamma(n_x,\lambda+\theta),$$ with the convention $\Gamma(0,\lambda+\theta)=\delta_0$. In particular, the distribution of $Y_T$ given $X_0=0$ is
$$\frac{\theta}{\lambda+\theta}d\delta_0(y) + \sum_{n=1}^{+\infty}\frac{\theta\lambda^n}{(n_0-1)!(\lambda+\theta)^{n_1}} 1_{y>0} y^{n_0-1}e^{-(\lambda+\theta) y} dy.$$
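The coefficient and the exponent both come from $n_0+n_1=n+1$: the $n$-th term of the mixture is the Gamma$(n_0,\lambda+\theta)$ density weighted by the mixture probability,
$$\frac{\theta\lambda^n}{(\lambda+\theta)^{n+1}}\cdot\frac{(\lambda+\theta)^{n_0}}{(n_0-1)!}\,y^{n_0-1}e^{-(\lambda+\theta)y} = \frac{\theta\lambda^n}{(n_0-1)!\,(\lambda+\theta)^{n_1}}\,y^{n_0-1}e^{-(\lambda+\theta)y},$$
since $(\lambda+\theta)^{n_0}/(\lambda+\theta)^{n+1}=(\lambda+\theta)^{-n_1}$. The $e^{-\theta y}$ part of this exponential is what gets inverted below.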
Observe that for all $y>0$
\begin{eqnarray*}
\frac{\theta\lambda^{n_1}}{(\lambda+\theta)^{n_1}}
e^{-\theta y}
&=& \theta e^{-\theta y} \int_0^\infty \frac{\lambda^{n_1}}{(n_1-1)!}s^{n_1-1}
e^{-(\lambda+\theta) s} ds \\
&=& \int_y^\infty \frac{\lambda^{n_1}}{(n_1-1)!}
(t-y) ^{n_1-1} e^{-\lambda (t-y)} \theta e^{-\theta t}dt \\
&=& \int_0^\infty 1_{t>y} \frac{\lambda^{n_1}}{(n_1-1)!}
(t-y) ^{n_1-1} e^{-\lambda (t-y)} \theta e^{-\theta t}dt
\end{eqnarray*}
Hence, dividing both sides by $\theta$ and inverting the Laplace transform yields that the distribution of $Y_t$ given $X_0=0$ is
$$e^{-\lambda t} d\delta_0(y) + \sum_{n=1}^{+\infty} \lambda^{n}e^{-\lambda t} \frac{1}{(n_0-1)!(n_1-1)!} 1_{0<y<t} y^{n_0-1}(t-y) ^{n_1-1}dy,$$
namely
$$e^{-\lambda t} d\delta_0(y) + \sum_{n=1}^{+\infty} \frac{(\lambda t)^{n}e^{-\lambda t}}{n!} \frac{n!}{(n_0-1)!(n_1-1)!}1_{0<y<t}\frac{1}{t^n} y^{n_0-1}(t-y) ^{n_1-1}dy.$$
Conditionally on $[N_t=n;X_0=0]$, $Y_t/t$ follows a Beta distribution with parameters $n_0$ and $n_1$.
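The derivation never states what $Y_t$ is; the counts $n_x$ are consistent with $Y_t$ being the time spent in state $1$ up to time $t$ by a chain on $\{0,1\}$ that jumps at rate $\lambda$ (this is my assumed reading). Under that assumption, the last formulas can be sanity-checked by simulation: $P[Y_t=0]=e^{-\lambda t}$, and on $\{N_t=2\}$ with $X_0=0$ one has $n_0=1$, $n_1=2$, so $Y_t/t$ should be Beta$(1,2)$, with mean $1/3$:

```python
# Simulation sketch under the ASSUMED definition: Y_t = occupation time of
# state 1 for a two-state chain flipping at rate lam, started at X_0 = 0.
import numpy as np

rng = np.random.default_rng(2)
lam, t, trials = 1.0, 2.0, 50_000

zeros = 0
beta_samples = []                 # samples of Y_t / t on the event {N_t = 2}
for _ in range(trials):
    jumps = np.cumsum(rng.exponential(1 / lam, size=30))
    jumps = jumps[jumps < t]
    # with X_0 = 0, the state on [S_k, S_{k+1}) is k mod 2, so Y_t is the
    # total length of the odd-indexed intervals
    edges = np.concatenate(([0.0], jumps, [t]))
    y = np.diff(edges)[1::2].sum()
    if len(jumps) == 0:
        zeros += 1
    elif len(jumps) == 2:
        beta_samples.append(y / t)

print(zeros / trials)             # estimates P[Y_t = 0] = e^{-lam t}
print(np.mean(beta_samples))      # estimates the Beta(1, 2) mean, 1/3
```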
Best Answer
In the case where the CTMC is finite and time-homogeneous, i.e., $\sigma_t$ is independent of $t$ and $\lambda$ only depends on the current state (which I'll denote as $\lambda_j$ for state $j$), you can apply the CLT to the average waiting time in each state.
Formally, let $(i_n)_{n\in\mathbb{N}}$ be the sequence of indices s.t. we are in state $j$ after the $i_n$-th transition. Then $S_{j,n}=\sum_{k=1}^n \frac{T_{i_k}}{n}$ gives the average time for a transition from $j$, and by the CLT this approaches a normal distribution with expected value $1/\lambda_j$ and variance $1/(\lambda_j^2n)$, i.e., $S_{j,n}\approx \mathcal{N}(\frac{1}{\lambda_j},\frac{1}{\lambda_j^2n})$ for large $n$.
Moreover, you can compute the expected number of visits for each state by viewing your process as a discrete time Markov chain. Let $E_n(j)$ denote the number of times we expect to visit state $j$ within the first $n$ transitions (regardless of how much time they take). Let $P(j,k)$ denote the probability of being in state $j$ after $k$ transitions. Clearly $E_n(j)=\sum_{k=0}^nP(j,k)$. In the limit, $P(j,n)$ approaches the steady-state probabilities when viewing the system as a discrete time Markov chain. So if we denote the steady-state probability of state $j$ as $E(j)$ we have $E_n(j)\approx nE(j)$ for large $n$.
Then the total time of a run of $n$ transitions is approximately the number of visits to state $j$, times the average waiting time in state $j$, summed over all states:
$$\sum_{j\in S}nE(j)\,\mathcal{N}\Big(\frac{1}{\lambda_j},\frac{1}{\lambda_j^2 nE(j)}\Big)$$
for large $n$. Since this is a sum of independent, approximately normal random variables, the total run time is itself approximately normally distributed.
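This normal approximation can be tested numerically. The sketch below is my own illustration under extra assumptions (a symmetric two-state chain with exit rates $\lambda_0=1$, $\lambda_1=3$, so the embedded chain has $E(0)=E(1)=1/2$): the empirical mean and variance of the total run time should match $\sum_j nE(j)/\lambda_j$ and $\sum_j nE(j)/\lambda_j^2$.

```python
# Numerical sketch for an ASSUMED concrete chain: states {0, 1} with exit
# rates lam_0 = 1, lam_1 = 3 and symmetric transitions, so the embedded
# DTMC has steady state E(0) = E(1) = 1/2.
import numpy as np

rng = np.random.default_rng(3)
lam = np.array([1.0, 3.0])
n, trials = 1000, 10_000

# a run of n transitions alternates 0, 1, 0, 1, ... in this two-state chain
states = np.tile([0, 1], n // 2)
totals = rng.exponential(1 / lam[states], size=(trials, n)).sum(axis=1)

mean_pred = (n / 2) * (1 / lam).sum()       # sum_j n E(j) / lam_j
var_pred = (n / 2) * (1 / lam**2).sum()     # sum_j n E(j) / lam_j^2
print(totals.mean(), mean_pred)
print(totals.var(), var_pred)
```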