Interchanging a derivative with an expectation or an integral can be done using the dominated convergence theorem. Here is a version of such a result.
Lemma. Let $X$ be a random variable taking values in $\mathcal{X}$ and let $g\colon \mathbb{R}\times \mathcal{X} \to \mathbb{R}$ be a function such that $g(t, X)$ is integrable for all $t$ and $g$ is continuously differentiable with respect to $t$. Assume that there is a random variable $Z$ with $\mathbb{E}(Z) < \infty$ such that $\bigl|\frac{\partial}{\partial t} g(t, X)\bigr| \leq Z$ a.s. for all $t$. Then
$$\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr)
= \mathbb{E}\bigl(\frac{\partial}{\partial t} g(t, X)\bigr).$$
Proof. We have
$$\begin{align*}
\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr)
&= \lim_{h\to 0} \frac1h \Bigl( \mathbb{E}\bigl(g(t+h, X)\bigr) - \mathbb{E}\bigl(g(t, X)\bigr) \Bigr) \\
&= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{g(t+h, X) - g(t, X)}{h} \Bigr) \\
&= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(\tau(h), X) \Bigr),
\end{align*}$$
where $\tau(h)$, which lies between $t$ and $t+h$ (and in general depends on $X$), exists by the mean value theorem.
By assumption we have
$$\Bigl| \frac{\partial}{\partial t} g(\tau(h), X) \Bigr| \leq Z$$
and thus we can use the dominated convergence theorem to conclude
$$\begin{equation*}
\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr)
= \mathbb{E}\Bigl( \lim_{h\to 0} \frac{\partial}{\partial t} g(\tau(h), X) \Bigr)
= \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(t, X) \Bigr).
\end{equation*}$$
Here the last equality uses $\tau(h) \to t$ as $h \to 0$ together with the continuity of $\frac{\partial}{\partial t} g$. This completes the proof.
In your case you would have $g(t, X) = \int_0^t f(X_s) \,ds$ and a sufficient condition to obtain $\frac{d}{dt} \mathbb{E}(Y_t) = \mathbb{E}\bigl(f(X_t)\bigr)$ would be for $f$ to be bounded.
If you want to take the derivative only at a single point $t=t^\ast$, the dominating bound on the derivative is only required for $t$ in a neighbourhood of $t^\ast$. Variants of the lemma can be derived by using different convergence theorems in place of the dominated convergence theorem, e.g. by using the Vitali convergence theorem.
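For illustration only (not part of the proof), here is a small Monte Carlo sketch of the lemma with the concrete choice $X\sim N(0,1)$ and $g(t,X)=\cos(tX)$; in this case $\mathbb{E}\bigl(g(t,X)\bigr)=e^{-t^2/2}$ is known in closed form and $Z=|X|$ serves as a dominating variable.

```python
# Monte Carlo sanity check of the lemma (a sketch, not part of the argument).
# Example: X ~ N(0,1), g(t, X) = cos(t X), so that
#   E[g(t, X)] = exp(-t^2 / 2)  and  d/dt E[g(t, X)] = -t exp(-t^2 / 2),
# while d/dt g(t, X) = -X sin(t X) is dominated by Z = |X|, which is integrable.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(1_000_000)
t = 0.7

# Left-hand side of the lemma: derivative of the expectation (closed form here).
lhs = -t * np.exp(-t**2 / 2)

# Right-hand side: expectation of the pathwise derivative, estimated by averaging.
rhs = np.mean(-X * np.sin(t * X))

print(f"d/dt E[g(t,X)] = {lhs:.4f},  E[d/dt g(t,X)] = {rhs:.4f}")
# The two values agree up to Monte Carlo error (about 1e-3 with 10^6 samples).
```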
Based on saz's comments:
'if we integrate over the whole space, then we don't have to change the bounds of integration'
$$\int_{\mathbb R} \int_{\mathbb R} f(x,y) dx dy = \int_{\mathbb R} \int_{\mathbb R} f(x,y) dy dx$$
for applicable $f$. Also,
$$\int_{\Omega} \int_{\mathbb R} f(t,\omega) \,dt \,d\mathbb P(\omega) = \int_{\mathbb R} \int_{\Omega} f(t,\omega) \,d\mathbb P(\omega) \,dt$$
for applicable $f$. Now apply this with $$f(t, \omega) := X_t^2(\omega)\,1_{[0,T]}(t).$$
A notable difference between the two Fubini theorems is that in basic calculus $f$ is required to be continuous, while in the measure-theoretic version used in stochastic calculus $f$ is not required to be continuous, only measurable.
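For illustration, here is a small simulation sketch of the swap above; it assumes, purely for concreteness, that $X_t$ is a standard Brownian motion, so that $\mathbb{E}(X_t^2) = t$ and both iterated integrals equal $\int_0^T t \,dt = T^2/2$.

```python
# Numerical illustration of the stochastic Fubini swap (a sketch, assuming
# X_t is a standard Brownian motion purely for concreteness).
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 500, 10_000
dt = T / n_steps

# Simulate Brownian paths on a grid (rows: samples omega, columns: times t).
W = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)

# E[ int_0^T X_t^2 dt ]: integrate each path in t, then take the expectation.
e_of_integral = np.mean(np.sum(W**2, axis=1) * dt)

# int_0^T E[ X_t^2 ] dt: take the expectation at each t, then integrate in t.
integral_of_e = np.sum(np.mean(W**2, axis=0) * dt)

# For a fixed finite sample the two sums agree exactly (discrete Fubini);
# both approximate int_0^T E[X_t^2] dt = T^2 / 2 up to Monte Carlo error.
print(e_of_integral, integral_of_e, T**2 / 2)
```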
Best Answer
Continuity is clearly not enough. Take:
$f (x) = x$;
the underlying measure space for the process as $[0,1]$ with the Lebesgue measure;
$X_s (\omega) = g(s, \omega)$ for some measurable function $g$.
Then you are asking whether
$$\int_0^1 \int_0^t g(s, \omega) \ \text{d}s \ \text{d} \omega = \int_0^t \int_0^1 g(s, \omega) \ \text{d}\omega \ \text{d} s,$$
which is the usual Fubini theorem for $g$, and thus still needs additional assumptions (positivity or integrability).
The answer is going to be boring; the equality holds if $f$ is nonnegative or if $f(X_s)$ is integrable over $[0,t]\times\Omega$ (in particular, if $f$ is bounded). The latter condition reads:
$$\int_0^t \mathbb{E} (|f(X_s)|) \ \text{d} s < +\infty.$$
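For instance, if $X$ happens to be a standard Brownian motion and $f(x) = x^2$ (unbounded but nonnegative), then $\int_0^t \mathbb{E} (|f(X_s)|) \ \text{d} s = \int_0^t s \ \text{d} s = t^2/2 < +\infty$, so the interchange is justified and gives $\frac{d}{dt} \mathbb{E}(Y_t) = \mathbb{E}(X_t^2) = t$.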