Towards the weakest requirements: assume only that $f_x(x,y)$ exists for every $(x,y)$ and that $f_x(x,\cdot)$ is Riemann integrable over $[a,b]$.
Observe that
$$
\frac{1}{h}\bigg(\int_a^b f(x+h,y)\,dy-\int_a^b f(x,y)\,dy\,\bigg)-\int_a^b f_x(x,y)\,dy\\ =\frac{1}{h}\int_a^b\int_x^{x+h} \big(\,f_x(t,y)-f_x(x,y)\big)\,dt\,dy,
$$
and hence
$$
\bigg|\,\frac{1}{h}\bigg(\int_a^b f(x+h,y)\,dy-\int_a^b f(x,y)\,dy\,\bigg)-\int_a^b f_x(x,y)\,dy\,\bigg| \\ \le \frac{1}{h}\int_a^b\int_x^{x+h} \big|\,f_x(t,y)-f_x(x,y)\big|\,dt\,dy \\=
\frac{1}{h}\int_x^{x+h} \int_a^b \big|\,f_x(t,y)-f_x(x,y)\big|\,dy\,dt=
\frac{1}{h}\int_x^{x+h}\|\,f_x(t,\cdot)-f_x(x,\cdot)\|_{L^1[a,b]}\,dt
$$
where $\,\|g\|_{L^1[a,b]}=\int_a^b |g(y)|\,dy$.
So, if
$$
\lim_{h\to 0}\frac{1}{h}\int_x^{x+h}\|\,f_x(t,\cdot)-f_x(x,\cdot)\|_{L^1[a,b]}\,dt=0, \tag{1}
$$
then
$$
\frac{d}{dx}\int_a^b f(x,y)\,dy=\int_a^b f_x(x,y)\,dy.
$$
Note that Condition $(1)$ is significantly weaker than the condition
$$
\lim_{h\to 0}\|\,f_x(x+h,\cdot)-f_x(x,\cdot)\|_{L^1[a,b]}=0,
$$
which, in turn, is significantly weaker than uniform continuity of $f_x$.
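As a numerical sanity check of the identity above, the following sketch compares both sides of the Leibniz rule. The choices $f(x,y)=\sin(xy)$, $[a,b]=[0,1]$, the point $x_0=0.7$, and the midpoint-rule integrator are all illustrative assumptions of mine, not part of the argument; note that $f_x(x,y)=y\cos(xy)$ is jointly continuous here, so Condition $(1)$ holds.

```python
import math

def midpoint(g, a, b, n=10_000):
    """Midpoint-rule approximation of the Riemann integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# F(x) = \int_0^1 sin(x*y) dy, with f_x(x, y) = y*cos(x*y).
def F(x):
    return midpoint(lambda y: math.sin(x * y), 0.0, 1.0)

x0, h = 0.7, 1e-5
lhs = (F(x0 + h) - F(x0 - h)) / (2 * h)                   # d/dx of the integral
rhs = midpoint(lambda y: y * math.cos(x0 * y), 0.0, 1.0)  # integral of f_x
print(lhs, rhs)
```

The two printed numbers agree to several decimal places, as the argument predicts.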
First, let's look at the version of the Leibniz rule that you referenced (from this post). The formula is of the form
$$
\frac{\mathrm{d}}{\mathrm{d}t}\int_{-\infty}^{\infty} f(x,t) \mathrm{d}x = \int_{-\infty}^{\infty}\frac{\partial}{\partial t}f(x,t) \mathrm{d}x.
$$
Notice in particular that the variable $t$ that we are differentiating with respect to is not the same as the variable $x$, which we are integrating with respect to. So in order to formulate a version of the formula for expected values we would need our random variable to depend on the variable $t$ in some way. I will therefore write $Z_t$ instead of $Z(x)$.
Version 1
Since $Z_t$ now depends on $t$ we would expect that the density $f_{Z_t}$ also depends on $t$ so we will write $f_{Z_t}(x) = f(x,t)$. Assuming that this function is differentiable and that Leibniz rule applies, we get
$$ \frac{d}{dt}(E[Z_t]) = \frac{d}{dt} \int_0^\infty x f(x,t) \: dx = \int_0^\infty x \frac{\partial}{\partial t}f(x,t) \: dx.$$
Example 1
Suppose that $Z_t$ is an exponentially distributed random variable with rate parameter $\lambda = t$. Then $f(x,t) = te^{-tx}$, so
\begin{align*}\frac{d}{dt}(E[Z_t]) &= \int_0^\infty x\frac{\partial}{\partial t}(te^{-tx}) \: dx \\
&=\int_0^\infty xe^{-tx} - x^2te^{-tx} \: dx \\
&= \frac{1}{t^2} - \frac{2}{t^2} \\
&= -\frac{1}{t^2}.
\end{align*}
This can easily be verified since $E[Z_t] = \frac{1}{t}$.
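The computation in Example 1 can be checked numerically. This is only a sketch: the evaluation point $t_0 = 2$, the truncation of the improper integral at $x = 40$ (the exponential tail beyond it is negligible for $t$ near $2$), and the midpoint-rule integrator are all ad hoc choices of mine.

```python
import math

def midpoint(g, a, b, n=20_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

t0 = 2.0

# E[Z_t] = \int_0^infty x * t * e^{-tx} dx, truncated at x = 40.
def mean(t):
    return midpoint(lambda x: x * t * math.exp(-t * x), 0.0, 40.0)

h = 1e-4
finite_diff = (mean(t0 + h) - mean(t0 - h)) / (2 * h)  # d/dt E[Z_t], numerically
leibniz = midpoint(
    lambda x: x * math.exp(-t0 * x) - x**2 * t0 * math.exp(-t0 * x), 0.0, 40.0
)                                                      # integral of the t-derivative
print(finite_diff, leibniz, -1.0 / t0**2)              # all three should agree
```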
Version 2
A common situation is when $Z_t$ depends on another random variable $X$ through a relation $Z_t = g(X,t)$ for a function $g$. Again assuming that $g$ is differentiable and that the Leibniz formula applies we may write
$$\frac{d}{dt}(E[g(X,t)]) = \frac{d}{dt} \int_{-\infty}^\infty g(x,t) f_X(x) \: dx = \int_{-\infty}^\infty \frac{\partial}{\partial t} (g(x,t)) f_X(x) \: dx,$$
which can be written more compactly as
$$\frac{d}{dt}(E[g(X,t)]) = E\left[\frac{\partial}{\partial t} g(X,t)\right].$$
Example 2
In particular assume $Z_t = e^{tX}$ and define $M_X(t) = E[Z_t]$, then
$$M_X'(t) = E\left[\frac{\partial}{\partial t} e^{tX}\right] = E[Xe^{tX}],$$
and in particular $M_X'(0) = E[X]$. $M_X$ is called the moment generating function of $X$.
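A quick numerical check of $M_X'(0) = E[X]$, as a sketch under my own assumption that $X \sim \text{Exponential}(1)$ (so $f_X(x) = e^{-x}$ and $E[X] = 1$); the truncation of the integral at $x = 50$ and the midpoint-rule integrator are ad hoc choices:

```python
import math

def midpoint(g, a, b, n=20_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Assumption: X ~ Exponential(1), density f_X(x) = e^{-x}, so E[X] = 1.
def M(t):
    """Moment generating function M_X(t) = E[e^{tX}], truncated at x = 50."""
    return midpoint(lambda x: math.exp(t * x) * math.exp(-x), 0.0, 50.0)

h = 1e-4
M_prime_0 = (M(h) - M(-h)) / (2 * h)  # central difference for M_X'(0)
print(M_prime_0)                      # should be close to E[X] = 1
```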
Best Answer
TL;DR: if the partial derivative $\frac{\partial f}{\partial t}$ is jointly continuous in the variables $x$ and $t$, then the Leibniz rule works. If you use the Lebesgue integral (which gives you the dominated convergence theorem), this condition can be relaxed.
Leibniz rule for Riemann integration
When working with Riemann integrals, the standard criterion for switching a limit and an integral sign is the following statement, which relies on uniform convergence (it is, in fact, a special case of the dominated convergence theorem):

Theorem 1. Let $g_n : [a, b] \to \mathbb{R}$ be Riemann integrable functions converging uniformly to $g$. Then $g$ is Riemann integrable and $\lim_{n \to \infty} \int_a^b g_n(x)\,\mathrm{d}x = \int_a^b g(x)\,\mathrm{d}x$.
Using this result, we can establish a Leibniz rule for Riemann integration. Because notation with multiple variables can get confusing, let us define $F : \mathbb{R} \to \mathbb{R}$ to be the function $$ F(t) = \int_a^b f(x, t)\,\mathrm{d}x, $$ where $f : [a, b] \times \mathbb{R} \to \mathbb{R}$ is the function in your question. For a fixed $t_0 \in \mathbb{R}$, we would like to determine whether $F'(t_0)$ exists and whether it can be obtained by the Leibniz rule. The key observation is that we can write differentiation as the limit $$ \tag{1} F'(t_0) = \lim_{h \to 0} \frac{F(t_0 + h) - F(t_0)}{h} = \lim_{h \to 0} \int_a^b \frac{f(x, t_0 + h) - f(x, t_0)}{h} \,\mathrm{d}x. $$ To apply Theorem 1, we would like the difference quotient to converge uniformly. (That is to say, for every sequence $h_n \to 0$, the difference quotient $\frac{f(x, t_0 + h_n) - f(x, t_0)}{h_n}$ should converge uniformly in $x$.) The difference quotient is a bit unwieldy to work with, but we can use the mean value theorem to instead write $$ \tag{2} F'(t_0) = \lim_{h \to 0} \int_a^b \frac{\partial f}{\partial t}(x, t_0 + h_x) \,\mathrm{d}x, $$ where $|h_x| \leq |h|$ for all $x \in [a, b]$. This leads to the following result.

Theorem 2. If $\frac{\partial f}{\partial t}$ exists and is jointly continuous in $x$ and $t$, then $F'(t_0)$ exists and $$ F'(t_0) = \int_a^b \frac{\partial f}{\partial t}(x, t_0)\,\mathrm{d}x. $$
By $(2)$, it suffices to show that for all sequences $h_n \to 0$, the functions $$g_n(x) := \frac{\partial f}{\partial t}(x, t_0 + (h_n)_x)$$ converge uniformly to $g(x) := \frac{\partial f}{\partial t}(x, t_0)$. This can be done by utilizing the uniform continuity of $\frac{\partial f}{\partial t}$ on compact sets. I'll leave the rest of the proof to you.
Also, (if I'm understanding correctly) your criterion of $f(x, t + 1 / n)$ converging uniformly to $f(x, t)$ doesn't exactly work. For one, it says nothing about the uniform convergence of the difference quotient in $(1)$ since it only works with discrete time steps of $1 / n$. Even if $f(x, t + h)$ were to converge uniformly to $f(x, t)$ as $h \to 0$, it would not guarantee uniform convergence of the difference quotient (it does not even guarantee the existence of a derivative!).
Leibniz rule for Lebesgue integration
Finally, here's a criterion for the Leibniz rule if we are using the Lebesgue integral.

Theorem 3. Let $(X, \mu)$ be a measure space, and suppose $f : X \times \mathbb{R} \to \mathbb{R}$ is such that $f(\cdot, t)$ is integrable for every $t$ and $\frac{\partial f}{\partial t}(x, t)$ exists. If there are an integrable function $g : X \to \mathbb{R}$ and a $\delta > 0$ such that $\left| \frac{\partial f}{\partial t}(x, t) \right| \leq g(x)$ for all $x \in X$ and all $t \in [t_0 - \delta, t_0 + \delta]$, then $$ \frac{\mathrm{d}}{\mathrm{d}t}\bigg|_{t = t_0} \int_X f(x, t)\,\mathrm{d}\mu(x) = \int_X \frac{\partial f}{\partial t}(x, t_0)\,\mathrm{d}\mu(x). $$
Observe that Theorem 3 supersedes Theorem 2, because continuous functions are bounded on compact sets. Taking $X = [a, b]$ and letting $g$ be a constant bound of $\frac{\partial f}{\partial t}$ on the compact set $[a, b] \times [t_0 - \delta, t_0 + \delta]$, Theorem 2 follows.
The proof of Theorem 3 is arguably easier than in the case of Riemann integration, at least if one is equipped with the machinery of measure theory. After obtaining $(2)$, the result follows directly from the dominated convergence theorem.
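To see the dominated-convergence criterion in action on a genuinely improper integral, here is a sketch with an example of my own choosing (not from the answer above): $F(t) = \int_0^\infty e^{-tx}\frac{\sin x}{x}\,\mathrm{d}x$. For $t \ge 1/2$ we have $\left|\frac{\partial}{\partial t}\, e^{-tx}\frac{\sin x}{x}\right| = e^{-tx}\lvert\sin x\rvert \le e^{-x/2}$, an integrable dominating function, so Theorem 3 applies at $t_0 = 1$; the truncation at $x = 60$ is an ad hoc numerical choice.

```python
import math

def midpoint(g, a, b, n=40_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

t0 = 1.0

# F(t) = \int_0^infty e^{-tx} sin(x)/x dx, truncated at x = 60.
# (The midpoint rule never evaluates at x = 0, so sin(x)/x is safe.)
def F(t):
    return midpoint(lambda x: math.exp(-t * x) * math.sin(x) / x, 0.0, 60.0)

h = 1e-4
finite_diff = (F(t0 + h) - F(t0 - h)) / (2 * h)                    # F'(t0), numerically
leibniz = midpoint(lambda x: -math.exp(-t0 * x) * math.sin(x), 0.0, 60.0)
print(finite_diff, leibniz)  # both close to -1/(1 + t0**2) = -0.5
```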