@Nate: I think your argument is fine. I didn't know Fernique's theorem; here is the argument I had in mind for that part.
Fix $T>0$ (in your case, eventually, $T+1$). Applying Jensen's inequality we get:
$$\exp\left(\frac{1}{2}\int_S^{S+\epsilon} X_t^2\,dt\right)=\exp\left(\frac{1}{\epsilon}\int_S^{S+\epsilon} \frac{\epsilon}{2}X_t^2\,dt\right)\leq \frac{1}{\epsilon}\int_S^{S+\epsilon} \exp\left(\frac{\epsilon}{2}X_t^2\right)\,dt \qquad a.s..$$
Taking expectations and using Tonelli's theorem, it suffices to study:
$$\frac{1}{\epsilon}\int_S^{S+\epsilon} E\left[\exp\left(\frac{\epsilon}{2}X_t^2\right)\right]\,dt.$$
$X_t \sim N(\mu_t=X_0e^{-t}\,,\,\,\sigma_t^2=\frac{1-e^{-2t}}{2})$ so
$$E\left[\exp\left(\frac{\epsilon}{2}X_t^2\right)\right]=E\left[\exp\left(\frac{\epsilon}{2}(\mu_t+\sigma_tZ)^2\right)\right]=$$
$$=e^{\frac{\epsilon}{2}\mu_t^2}\int_{\mathbb{R}} \frac{1}{\sqrt{2 \pi}}\, \exp\left(-\frac{x^2}{2}(1-\sigma_t^2 \epsilon)+\epsilon \mu_t \sigma_t x\right)dx. $$
Setting $\lambda_t=1-\epsilon \sigma_t^2=\frac{1}{2}[2-\epsilon(1-e^{-2t})]$, if $\lambda_t>0$ (for example when $\epsilon < 1$) the last integral converges and equals:$$\exp\left(\frac {\epsilon}{2}\mu_t^2\right)\,\exp\left(\frac{\epsilon^2}{2 \lambda_t}\mu_t^2 \sigma_t^2\right)\frac{1}{\sqrt{\lambda_t}}.$$
Finally, all the functions involved are continuous in $t$ and, since $\epsilon < 1$, $\lambda_t$ stays bounded away from $0$ on $[S,S+\epsilon]$; hence $t \mapsto E\left[\exp\left(\frac{\epsilon}{2}X_t^2\right)\right]$ is bounded there and the integral is finite.
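As a sanity check (my own addition, not part of the original answer; the parameter values and the use of numpy are illustrative assumptions), one can compare a Monte Carlo estimate of $E\left[\exp\left(\frac{\epsilon}{2}X_t^2\right)\right]$ with the closed form above:

```python
import numpy as np

rng = np.random.default_rng(0)

def mgf_closed_form(eps, mu, sigma2):
    """Closed form of E[exp(eps/2 * X^2)] for X ~ N(mu, sigma2),
    valid when lam = 1 - eps*sigma2 > 0."""
    lam = 1.0 - eps * sigma2
    return (np.exp(0.5 * eps * mu**2)
            * np.exp(eps**2 * mu**2 * sigma2 / (2.0 * lam))
            / np.sqrt(lam))

# OU marginal at time t started from X0 (illustrative values)
X0, t, eps = 1.0, 0.7, 0.5
mu_t = X0 * np.exp(-t)
sigma2_t = (1.0 - np.exp(-2.0 * t)) / 2.0

# Monte Carlo estimate of the same expectation
Z = rng.standard_normal(1_000_000)
X = mu_t + np.sqrt(sigma2_t) * Z
mc = np.exp(0.5 * eps * X**2).mean()

print(mc, mgf_closed_form(eps, mu_t, sigma2_t))
```

The two printed numbers agree to within Monte Carlo error, and the check requires $\epsilon\,\sigma_t^2 < 1$, exactly the condition $\lambda_t > 0$ above.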
My first idea was to use $T$ instead of $\epsilon$, but the moment generating function of the chi-squared distribution is not defined everywhere, only in a neighbourhood of $0$, so the small parameter $\epsilon$ is needed.
I will prove it for an Itô process of the type
$$X_t = X_0 + \int_0^ta_udu + \int_0^tb_udW_u, \quad 0 \leq t \leq T. \tag{1}$$
We will need the following two theorems.
Theorem 1. Let $M$ and $N$ be local martingales. Then the sum $M+N$ is also a local martingale.
Theorem 2. Let $c \mathscr{M}_{0,loc}$ denote the space of continuous
local martingales that start at zero. Let $M \in c \mathscr{M}_{0,loc}$ be of finite variation. Then $M \equiv 0$ a.s.
Answer: Let $X$ be a continuous local martingale of the form (1). Then the drift must be zero.
Proof.
We know the process $\int_0^t b_udW_u, 0 \leq t \leq T,$ is a continuous local martingale. Then the process
$$A_t := \int_0^ta_udu= X_t - X_0 - \int_0^tb_udW_u, \quad 0 \leq t \leq T $$
is a sum of continuous local martingales and hence, by Theorem 1, is itself a continuous local martingale. Also, note that $A_0 = 0,$ so $A \in c \mathscr M_{0,loc}.$ But the process $A$ is of finite variation, so by Theorem 2 it must be zero. This is the same as saying that $\int_0^ta_udu=0$ for all $0 \leq t \leq T.$ Since this integral is defined path by path, by elementary calculus we get $a_t = 0$ for almost every $0 \leq t \leq T.$
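The content of the proof can also be seen numerically (my own illustration; the constant coefficients $a=0.3$, $b=1$ and the Euler scheme are assumptions chosen for the demo): a nonzero drift pushes $E[X_t]$ from $X_0$ to $X_0 + at$, whereas a martingale keeps $E[X_t] = E[X_0]$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler scheme for dX = a dt + b dW with constant coefficients
a, b, X0 = 0.3, 1.0, 0.0
T, n_steps, n_paths = 1.0, 200, 100_000
dt = T / n_steps

X = np.full(n_paths, X0)
for _ in range(n_steps):
    X += a * dt + b * np.sqrt(dt) * rng.standard_normal(n_paths)

# A martingale would keep E[X_T] = X0; the drift moves it to X0 + a*T.
print(X.mean())  # ≈ 0.3
```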
Best Answer
Interchanging a derivative with an expectation or an integral can be done using the dominated convergence theorem. Here is a version of such a result.
Lemma. Let $X$ be a random variable taking values in $\mathcal{X}$ and let $g\colon \mathbb{R}\times \mathcal{X} \to \mathbb{R}$ be a function such that $g(t, X)$ is integrable for all $t$ and $g$ is continuously differentiable w.r.t. $t$. Assume that there is a random variable $Z$ such that $|\frac{\partial}{\partial t} g(t, X)| \leq Z$ a.s. for all $t$ and $\mathbb{E}(Z) < \infty$. Then $$\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr) = \mathbb{E}\bigl(\frac{\partial}{\partial t} g(t, X)\bigr).$$
Proof. We have $$\begin{align*} \frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr) &= \lim_{h\to 0} \frac1h \Bigl( \mathbb{E}\bigl(g(t+h, X)\bigr) - \mathbb{E}\bigl(g(t, X)\bigr) \Bigr) \\ &= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{g(t+h, X) - g(t, X)}{h} \Bigr) \\ &= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(\tau(h), X) \Bigr), \end{align*}$$ where $\tau(h) \in (t, t+h)$ exists by the mean value theorem. By assumption we have $$\Bigl| \frac{\partial}{\partial t} g(\tau(h), X) \Bigr| \leq Z$$ and thus we can use the dominated convergence theorem to conclude $$\begin{equation*} \frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr) = \mathbb{E}\Bigl( \lim_{h\to 0} \frac{\partial}{\partial t} g(\tau(h), X) \Bigr) = \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(t, X) \Bigr). \end{equation*}$$ This completes the proof.
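To make the lemma concrete (my own illustration; the choice of $g$ and the use of Gauss–Hermite quadrature are assumptions): take $X \sim N(0,1)$ and $g(t,x) = e^{tx}$. Near a fixed $t$ the derivative $x e^{tx}$ is dominated by the integrable variable $|X|e^{(|t|+1)|X|}$, and both sides of the identity equal $t\,e^{t^2/2}$.

```python
import numpy as np

# Gauss-Hermite quadrature: for X ~ N(0,1),
# E[f(X)] = (1/sqrt(pi)) * sum(w_i * f(sqrt(2) * x_i))
nodes, weights = np.polynomial.hermite.hermgauss(60)

def expect(f):
    return weights @ f(np.sqrt(2.0) * nodes) / np.sqrt(np.pi)

t, h = 0.8, 1e-5
g = lambda s: (lambda x: np.exp(s * x))        # g(s, x) = e^{s x}
dg = lambda s: (lambda x: x * np.exp(s * x))   # dg/ds = x e^{s x}

# Central finite difference of s -> E[g(s, X)] versus E[dg/ds (t, X)]
lhs = (expect(g(t + h)) - expect(g(t - h))) / (2.0 * h)
rhs = expect(dg(t))

print(lhs, rhs)  # both ≈ t * exp(t**2 / 2)
```

Both quantities agree with the exact value $t\,e^{t^2/2}$ obtained by differentiating the Gaussian moment generating function.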
In your case you would have $g(t, X) = \int_0^t f(X_s) \,ds$ and a sufficient condition to obtain $\frac{d}{dt} \mathbb{E}(Y_t) = \mathbb{E}\bigl(f(X_t)\bigr)$ would be for $f$ to be bounded.
If you want to take the derivative only for a single point $t=t^\ast$, boundedness of the derivative is only required in a neighbourhood of $t^\ast$. Variants of the lemma can be derived by using different convergence theorems in place of the dominated convergence theorem, e.g. by using the Vitali convergence theorem.
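For the specific case $Y_t = \int_0^t f(X_s)\,ds$ with bounded $f$, here is a numerical sketch (my own addition; the OU dynamics $dX = -X\,dt + dW$, the choice $f = \sin$, and all parameter values are assumptions). For a Gaussian $X_{t} \sim N(\mu, \sigma^2)$ one has $E[\sin(X_{t})] = \sin(\mu)e^{-\sigma^2/2}$, so $\frac{d}{dt}\mathbb{E}(Y_t)$, estimated by a finite difference of the simulated $t \mapsto \mathbb{E}(Y_t)$, can be compared with the exact $\mathbb{E}\bigl(f(X_t)\bigr)$:

```python
import numpy as np

rng = np.random.default_rng(3)

f = np.sin  # bounded, so the lemma's domination condition holds

# Euler scheme for the OU process dX = -X dt + dW started at x0,
# accumulating Y_t = int_0^t f(X_s) ds along each path
x0, dt, n_paths, n_steps = 1.0, 1e-3, 100_000, 1050
X = np.full(n_paths, x0)
Y = np.zeros(n_paths)
mean_Y = np.empty(n_steps + 1)
for k in range(n_steps + 1):
    mean_Y[k] = Y.mean()
    Y += f(X) * dt
    X += -X * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

# d/dt E[Y_t] at t* = 1 by central difference over a window of width 0.1
t_star, k_star, k_h = 1.0, 1000, 50
lhs = (mean_Y[k_star + k_h] - mean_Y[k_star - k_h]) / (2 * k_h * dt)

# Exact E[f(X_{t*})] = sin(mu) * exp(-sigma^2/2) for X_{t*} ~ N(mu, sigma^2)
mu = x0 * np.exp(-t_star)
sigma2 = (1.0 - np.exp(-2.0 * t_star)) / 2.0
rhs = np.sin(mu) * np.exp(-sigma2 / 2.0)

print(lhs, rhs)
```

The finite difference of the simulated mean matches the exact right-hand side up to Monte Carlo and discretization error, illustrating $\frac{d}{dt} \mathbb{E}(Y_t) = \mathbb{E}\bigl(f(X_t)\bigr)$ for bounded $f$.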