Let $0\leq i\leq k$ and $\alpha,\beta>0$ be such that the image of $f\big|_{(-\alpha,\alpha)\times(t_{i}-\beta,t_{i}+\beta)}$ is contained in $U\subseteq M$, where $(U,\rho)$ is a coordinate neighbourhood of $f(0,t_{i})$. Then there exist continuous functions $\lambda_{1},\ldots,\lambda_{n}:(-\alpha,\alpha)\times(t_{i}-\beta,t_{i}+\beta)\to\mathbb{R}$ such that $$\frac{\partial f}{\partial s}(s,t)=\sum_{j=1}^{n}\lambda_{j}(s,t)\frac{\partial}{\partial x_{j}}\bigg|_{f(s,t)}$$ It follows from the definition of the variation that $\lambda_{j}\big|_{(-\alpha,\alpha)\times[t_{i},t_{i}+\beta)}$ and $\lambda_{j}\big|_{(-\alpha,\alpha)\times(t_{i}-\beta,t_{i}]}$ are differentiable. Furthermore, it is a defining property of the covariant derivative that, wherever the expression is defined, we obtain:
$$\begin{aligned}\frac{D}{\partial s}\bigg|_{(s,t)}\frac{\partial f}{\partial s}&=\sum_{j=1}^{n}\frac{\partial \lambda_{j}}{\partial s}(s,t)\frac{\partial}{\partial x_{j}}\bigg|_{f(s,t)}+\sum_{j=1}^{n}\lambda_{j}(s,t)\frac{D}{\partial s}\bigg|_{(s,t)}\frac{\partial}{\partial x_{j}}\\&=\sum_{j=1}^{n}\frac{\partial \lambda_{j}}{\partial s}(s,t)\frac{\partial}{\partial x_{j}}\bigg|_{f(s,t)}+\sum_{j,m=1}^{n}\lambda_{j}(s,t)\lambda_{m}(s,t)\left(\nabla_{\frac{\partial}{\partial x_{m}}}\frac{\partial}{\partial x_{j}}\right)_{f(s,t)}\end{aligned}$$
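As an aside, the coordinate formula $(DV/dt)^{k}=\dot V^{k}+\sum_{i,j}\Gamma^{k}_{ij}\dot x^{i}V^{j}$ underlying the expansion above can be sanity-checked in a concrete chart. The sketch below (not part of the argument; the chart and all names are my own choices) uses polar coordinates on the punctured plane, where the covariant derivative along a curve must agree with the ordinary Euclidean derivative computed in Cartesian coordinates:

```python
# Sanity check of (DV/dt)^k = V'^k + Gamma^k_{ij} x'^i V^j in polar
# coordinates (r, theta) on the punctured plane, where the nonzero
# Christoffel symbols are G^r_{tt} = -r and G^t_{rt} = G^t_{tr} = 1/r.
import sympy as sp

t = sp.symbols('t')
r, th = sp.Function('r')(t), sp.Function('th')(t)      # curve in polar coords
Vr, Vth = sp.Function('Vr')(t), sp.Function('Vth')(t)  # vector field along it

# Covariant derivative components from the Christoffel formula:
DVr = sp.diff(Vr, t) - r * sp.diff(th, t) * Vth
DVth = sp.diff(Vth, t) + (sp.diff(r, t) * Vth + sp.diff(th, t) * Vr) / r

# Independent computation: push the field to Cartesian coordinates, take the
# ordinary derivative there (the plane is flat), pull back to the polar frame.
W = sp.Matrix([Vr * sp.cos(th) - Vth * r * sp.sin(th),
               Vr * sp.sin(th) + Vth * r * sp.cos(th)])
Wdot = sp.diff(W, t)
a = Wdot.dot(sp.Matrix([sp.cos(th), sp.sin(th)]))       # coefficient of d/dr
b = Wdot.dot(sp.Matrix([-sp.sin(th), sp.cos(th)])) / r  # coefficient of d/dth

assert sp.simplify(a - DVr) == 0
assert sp.simplify(b - DVth) == 0
```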
So all there is to check is that $\frac{\partial \lambda_{j}}{\partial s}(s,t_{i}^{+})=\frac{\partial \lambda_{j}}{\partial s}(s,t_{i}^{-})$. But this follows from the definition of $\frac{\partial \lambda_{j}}{\partial s}(s,t_{i})$. More formally, we can define $\lambda_{j}^{1}:=\lambda_{j}\big|_{(-\alpha,\alpha)\times(t_{i}-\beta,t_{i}]}$ and $\lambda_{j}^{2}:=\lambda_{j}\big|_{(-\alpha,\alpha)\times[t_{i},t_{i}+\beta)}$. By definition we have:
$$\frac{\partial \lambda_{j}}{\partial s}(s,t_{i}^{+})=\lim_{t\downarrow t_{i}}\frac{\partial\lambda_{j}}{\partial s}(s,t)=\lim_{t\downarrow t_{i}}\frac{\partial\lambda_{j}^{2}}{\partial s}(s,t)$$
and:
$$\frac{\partial \lambda_{j}}{\partial s}(s,t_{i}^{-})=\lim_{t\uparrow t_{i}}\frac{\partial\lambda_{j}}{\partial s}(s,t)=\lim_{t\uparrow t_{i}}\frac{\partial\lambda_{j}^{1}}{\partial s}(s,t)$$
As the restrictions are continuously differentiable by assumption, we obtain $\frac{\partial \lambda_{j}}{\partial s}(s,t_{i}^{+})=\frac{\partial \lambda_{j}^{2}}{\partial s}(s,t_{i})$ and $\frac{\partial \lambda_{j}}{\partial s}(s,t_{i}^{-})=\frac{\partial \lambda_{j}^{1}}{\partial s}(s,t_{i})$. Explicit calculation now yields:
$$\frac{\partial\lambda_{j}}{\partial s}(s,t_{i}^{+})=\lim_{h\to 0}\frac{\lambda_{j}^{2}(s+h,t_{i})-\lambda_{j}^{2}(s,t_{i})}{h}=\lim_{h\to 0}\frac{\lambda_{j}(s+h,t_{i})-\lambda_{j}(s,t_{i})}{h}$$ and $$\frac{\partial\lambda_{j}}{\partial s}(s,t_{i}^{-})=\lim_{h\to 0}\frac{\lambda_{j}^{1}(s+h,t_{i})-\lambda_{j}^{1}(s,t_{i})}{h}=\lim_{h\to 0}\frac{\lambda_{j}(s+h,t_{i})-\lambda_{j}(s,t_{i})}{h}$$
So indeed this is independent of whether we take the derivative from below $t_{i}$ or from above $t_{i}$.
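The mechanism can be illustrated on a toy $\lambda$ (the concrete functions below are made up for illustration): continuous on the whole strip, smooth on each half $t\leq t_{i}$ and $t\geq t_{i}$, with a kink in $t$ at the break, yet with matching $s$-derivatives along the break line, because both restrictions agree there as functions of $s$:

```python
# Toy version of the argument: lambda is continuous, its restrictions to
# t <= 0 and t >= 0 are smooth, and although d(lambda)/dt jumps at t = 0,
# the s-derivatives computed from either piece coincide on the line t = 0.
import sympy as sp

s, t = sp.symbols('s t')
lam1 = sp.sin(s) - t * s**2   # restriction to t <= 0 (illustrative choice)
lam2 = sp.sin(s) + t * s**3   # restriction to t >= 0 (kink in t at t = 0)

# The pieces coincide along t = 0 ...
assert sp.simplify((lam1 - lam2).subs(t, 0)) == 0
# ... hence so do their s-derivatives there (both equal cos(s)):
left = sp.diff(lam1, s).subs(t, 0)
right = sp.diff(lam2, s).subs(t, 0)
assert sp.simplify(left - right) == 0
```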
As a synopsis: as expected, this is nothing but the fact that at $t_{i}$ the definition of a variation yields the smooth curve $s\mapsto f(s,t_{i})$. Hence all your favourite operations can be applied to this curve, and since $t\mapsto f(s,t)$ is continuous, the derivatives are continuous in the parameter as well.
P.S. @Jason: I am sure that this just boils down to the solutions of a differential equation depending continuously on the initial conditions. The formal discussion just covers a special case, and indeed the proof is nothing but the fact that continuity means continuity from both sides. Thank you for your patience anyway.
Best Answer
I already gave an example in the comment section showing that a curve can be smooth although it has a cusp at some point: the curve $t\in \Bbb R \mapsto (t^2,t^3)$ is such a curve.
However, one could argue that the cusp in the latter example isn't a corner, since there is no angle between the two branches of the curve meeting there. Indeed, one can still define a tangent to the curve at the cusp geometrically, or by using derivatives of sufficiently high order. Hence, I would like to give an example with a proper angle, where no notion of tangent can exist.
You surely already know that the following function $$ \begin{array}{r|ccc} f\colon & \Bbb R &\longrightarrow &\Bbb R \\ & x &\longmapsto &\begin{cases} e^{-\frac{1}{x^2}} & \text{ if } x \neq 0, \\ 0 & \text{ if } x = 0, \end{cases} \end{array} $$ is smooth, with $f^{(k)}(0)=0$ for all $k\geqslant 0$. If not, take a look at this.
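If you want to convince yourself by machine, the vanishing of the one-sided derivative limits at the origin can be checked symbolically (shown here for the first few orders; this is a verification sketch, not part of the argument):

```python
# Check that f(x) = exp(-1/x^2) has all one-sided derivative limits
# equal to 0 at the origin, here for orders k = 0..3.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)
for k in range(4):
    g = sp.diff(f, x, k)
    assert sp.limit(g, x, 0, '+') == 0   # limit from the right
    assert sp.limit(g, x, 0, '-') == 0   # limit from the left
```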
Consider the parametrized curve $$ \begin{array}{r|ccc} \gamma\colon & \Bbb R &\longrightarrow& \Bbb R^2 \\ & t & \longmapsto & \begin{cases} (f(t),f(t)) & \text{ if } t>0, \\ (0,0) & \text{ if } t=0,\\ (-f(t),f(t)) &\text{ if } t<0. \end{cases} \end{array} $$ It can be easily shown that $\gamma$ is a smooth curve, even at the origin. Its support is the same as that of the curve $s\in (-1,1) \mapsto (s,|s|)$, and thus has a $90°$ corner at the origin. Still, it is a smooth curve.
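Both claims are easy to probe numerically (the helper names below are mine, not from the text): the image of $\gamma$ lies on the graph $y=|x|$, and the difference quotients $\gamma(h)/h$ collapse to $(0,0)$, i.e. $\gamma'(0)=(0,0)$:

```python
# Numerical illustration of the corner curve gamma.
import math

def f(x):
    # the flat function exp(-1/x^2), extended by 0 at the origin
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

def gamma(t):
    if t > 0:
        return (f(t), f(t))
    if t < 0:
        return (-f(t), f(t))
    return (0.0, 0.0)

# The support sits on the graph y = |x|:
for t in (-0.9, -0.3, 0.0, 0.2, 0.7):
    xx, yy = gamma(t)
    assert abs(yy - abs(xx)) < 1e-15

# The difference quotient at 0 shrinks rapidly as h decreases:
quotients = [abs(gamma(h)[0]) / h for h in (0.5, 0.25, 0.1)]
assert quotients == sorted(quotients, reverse=True)
assert quotients[-1] < 1e-40   # exp(-100)/0.1 is already negligible
```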
What is important here is that no smooth parametrization of this curve can have a non-zero derivative at the origin. In other words, this curve has no regular parametrization. This is the main reason why textbooks usually state their results beginning with "Let $\gamma$ be a regular curve".