Examples Summary (I will keep updating):
Note that all the processes below are centered (zero mean); as I mentioned in the Edit, the lemma only holds for zero-mean processes, and I don't think there is a way to relax this condition.
Also, please note that we are only talking about $1$-dimensional index sets; I did not develop the lemma for $n$-dimensional indices, so it is hard for me to say anything about objects like the Brownian sheet.
$(1)$ Standard Brownian motion: $B(s,t)=s\wedge t$, so $$B(t,t)+B(s,s)-2B(s,t)=t+s-2(s\wedge t)=|t-s|,$$ hence the desired inequality holds with $C=1$ and $r=1$; and indeed, standard Brownian motion is well known to have a continuous modification.
$(2)$ Standard Ornstein-Uhlenbeck process: $B(s,t)=e^{-|t-s|}$, so $$B(t,t)+B(s,s)-2B(s,t)=2-2e^{-|t-s|}.$$ If $|t-s|\geq 1$, then since $e^{-|t-s|}\geq 0$ we get $$2-2e^{-|t-s|}\leq 2\leq 2|t-s|.$$ If $|t-s|\leq 1$, then $e^{-|t-s|}\geq 1-|t-s|$, so $$2-2e^{-|t-s|}\leq 2-2(1-|t-s|)=2|t-s|.$$
Therefore, the desired inequality holds with $C=2$ and $r=1$.
$(3)$ The Brownian bridge: $B(s,t)=s\wedge t-st$, so we have $$B(t,t)+B(s,s)-2B(s,t)=t-t^{2}+s-s^{2}-2(s\wedge t-st).$$ If $t\leq s$, then $$\mathrm{RHS}=-t-t^{2}+s-s^{2}+2st=(s-t)-(t^{2}-2st+s^{2})=(s-t)-(t-s)^{2}\leq s-t,$$ and if $t\geq s$, then $$\mathrm{RHS}=t-t^{2}+s-s^{2}-2s+2st=(t-s)-(t^{2}-2st+s^{2})=(t-s)-(t-s)^{2}\leq t-s.$$
Hence, $$B(t,t)+B(s,s)-2B(s,t)\leq |s-t|.$$
Thus, the inequality is always satisfied with $C=1$ and $r=1$.
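As a quick numerical sanity check of examples $(1)$-$(3)$ (a sketch, not part of the argument; the grid on $[0,1]^2$ and the small floating-point tolerance are my own choices), one can evaluate $B(t,t)+B(s,s)-2B(s,t)$ pointwise and confirm it never exceeds $C|t-s|$:

```python
import numpy as np

# Covariance kernels from examples (1)-(3), paired with the constants C found above.
kernels = {
    "Brownian motion":    (lambda s, t: np.minimum(s, t),         1.0),
    "Ornstein-Uhlenbeck": (lambda s, t: np.exp(-np.abs(t - s)),   2.0),
    "Brownian bridge":    (lambda s, t: np.minimum(s, t) - s * t, 1.0),
}

s, t = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
for name, (B, C) in kernels.items():
    incr_var = B(t, t) + B(s, s) - 2 * B(s, t)   # = E[(X_t - X_s)^2]
    assert np.all(incr_var <= C * np.abs(t - s) + 1e-12), name
    print(f"{name}: inequality holds with C = {C}, r = 1")
```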
Indeed, $m$ and $\gamma$ are functions. More precisely, $m\colon\mathbb T\to\mathbb R$ is defined for $t\in\mathbb T$ as $m(t)=\mathbb E\left[X_t\right]$ and $\gamma\colon \mathbb T\times \mathbb T\to\mathbb R$ by
$\gamma\left(t,t'\right)=\operatorname{Cov}\left(X_t,X_{t'}\right)$.
When $\mathbb T$ is finite, $m$ is simply represented by a vector and $\gamma$ by a matrix.
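For instance, for centered standard Brownian motion observed at finitely many times, the covariance matrix has entries $\gamma(t_i,t_j)=t_i\wedge t_j$. A minimal sketch (the index set $\mathbb T$ below is my own illustrative choice):

```python
import numpy as np

# Finite index set T = {0.2, 0.5, 1.0}: for centered Brownian motion,
# m is the zero vector and gamma the matrix of pairwise minima.
T = np.array([0.2, 0.5, 1.0])
m = np.zeros_like(T)              # m(t) = E[X_t] = 0
gamma = np.minimum.outer(T, T)    # gamma(t, t') = t ∧ t'
print(gamma)
# [[0.2 0.2 0.2]
#  [0.2 0.5 0.5]
#  [0.2 0.5 1. ]]
```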
Best Answer
The key observation is that if the result holds for such a function $f$, then $\operatorname{Var}(X_t) = \operatorname{Var}(W(f(t))) = f(t)$. So our only choice is to define $f(t) = \operatorname{Var}(X_t)$.
Notice that for $t>s$, $$f(t) - f(s) = \mathbb{E}[X_t^2 - X_s^2] = \mathbb{E}[(X_t - X_s)^2 + 2X_s (X_t - X_s)] = \mathbb{E}[(X_t - X_s)^2] \geq 0,$$ where the final equality holds because $\mathbb{E}[X_s(X_t - X_s)] = 0$ by independence of increments (and the zero mean). Hence $f$ is a deterministic, non-decreasing function.
Now fix times $0 \leq t_1 < \dots < t_n$. We want to check that $$(X_{t_1}, \dots, X_{t_n}) \stackrel{d}{=} (W(f(t_1)), \dots, W(f(t_n))).$$ Since these are both centered Gaussian vectors, it suffices to check that for $1 \leq i<j \leq n$ we have $$\operatorname{Cov}(X_{t_i},X_{t_j}) = \operatorname{Cov}(W(f(t_i)),W(f(t_j))) = f(t_i)\wedge f(t_j) = f(t_i) = \operatorname{Var}(X_{t_i}),$$ where the second-to-last equality holds since $f$ is non-decreasing. The remaining identity $\operatorname{Cov}(X_{t_i},X_{t_j}) = \operatorname{Var}(X_{t_i})$ is again just a computation using independence of increments:
$$\mathbb{E}[X_{t_i}X_{t_j}] = \mathbb{E}[X_{t_i}(X_{t_j} - X_{t_i}) + X_{t_i}^2] = \operatorname{Var}(X_{t_i}),$$ since $\mathbb{E}[X_{t_i}(X_{t_j} - X_{t_i})] = 0$ by independence of increments and the zero mean.
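To see the time change in action, here is a minimal simulation sketch (the variance function $f(t)=t^2$, the time grid, and the sample size are my own illustrative choices): a centered process with independent Gaussian increments of variance $f(t_k)-f(t_{k-1})$ should have an empirical covariance matrix close to $f(t_i)\wedge f(t_j)$, i.e. the covariance of $(W(f(t_1)),\dots,W(f(t_n)))$.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda t: t**2                      # a non-decreasing variance function (my choice)
times = np.array([0.25, 0.5, 0.75, 1.0])

# Build X with independent Gaussian increments of variance f(t_k) - f(t_{k-1});
# then (X_{t_1}, ..., X_{t_n}) should have the law of (W(f(t_1)), ..., W(f(t_n))).
n = 200_000
stds = np.sqrt(np.diff(f(np.concatenate(([0.0], times)))))
incs = rng.normal(0.0, stds, size=(n, len(times)))
X = np.cumsum(incs, axis=1)

emp_cov = np.cov(X, rowvar=False)
theory = np.minimum.outer(f(times), f(times))   # Cov = f(t_i) ∧ f(t_j)
print(np.max(np.abs(emp_cov - theory)))          # small Monte Carlo error
```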