An example for $n = 1$ from the theory of random walks. Let $f$ be an (everywhere) discontinuous Lebesgue measurable function on $\mathbb{R}$. Here is an example with $f$ bounded by $1$, showing only the part $x \in [-3,3]$. (Note that I have only barely subsampled the graph on this interval. If I were to fully sample it, this finite-resolution representation would almost surely appear to be a solid rectangle of points of the graph. It was actually produced by generating $10^6$ uniformly distributed reals in $[-1,1]$, assigning them to evenly spaced abscissae, and then plotting a subsample of size $10^4$.)
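The generation procedure just described can be sketched as follows (a minimal sketch; the fixed seed is an assumption for reproducibility, and the plotting call is only indicated in a comment):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed: an assumption, for reproducibility

N = 10**6
x = np.linspace(-3, 3, N)       # evenly spaced abscissae on [-3, 3]
f = rng.uniform(-1.0, 1.0, N)   # i.i.d. uniform heights in [-1, 1]

# Plot only a subsample of size 10^4, as described in the text; plotting
# all 10^6 points would visually fill the rectangle [-3, 3] x [-1, 1].
idx = rng.choice(N, size=10**4, replace=False)
xs, fs = x[idx], f[idx]
# e.g. with matplotlib: plt.plot(xs, fs, ',')   (not executed here)
```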
This function is almost surely nowhere continuous (since any open interval almost surely contains points with heights arbitrarily close to both $-1$ and $1$). The integral of this function,
$$ \int_{0}^x \; f(t) \,\mathrm{d}t $$
is almost everywhere differentiable, but there is no hope of continuous differentiability. Graph of the integral (actually, of Riemann-sum approximations using $10^6$ intervals on $[-3,3]$):
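The Riemann-sum approximation behind that graph can be sketched like this (same assumed setup as before; the anchoring at $0$ enforces $\int_0^0 f = 0$, and since $|f| \leq 1$ the approximation inherits the Lipschitz bound $|F(x)| \leq |x|$ up to discretization error):

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed, for reproducibility

N = 10**6
x = np.linspace(-3, 3, N)
dx = x[1] - x[0]
f = rng.uniform(-1.0, 1.0, N)

# Riemann-sum antiderivative F(x) ~ integral of f from 0 to x,
# anchored so that F(0) = 0.
F = np.cumsum(f) * dx
i0 = np.searchsorted(x, 0.0)   # grid index closest to 0 from above
F -= F[i0]

# |f| <= 1 gives |F(x)| <= |x|, up to O(dx) discretization error.
```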
Picking a different instance of such a function (Lebesgue measurable on $\mathbb{R}$, bounded by $1$, nowhere continuous) and integrating it the same way, we can graph its integral.
These integrals are almost everywhere differentiable by construction (Lebesgue's differentiation theorem), and the derivative agrees with $f$ almost everywhere. (The theorem generalizes to $n > 1$, with the integral $\int_{[0,x_1]\times [0,x_2] \times \cdots \times [0,x_n]} \; f(t) \,\mathrm{d}t$, where we understand each interval to be $[0,a]$ when $0 \leq a$ and $[a,0]$ when $a < 0$.) In some sense, "most" functions are everywhere discontinuous messes, so "most" functions integrate to a differentiable, but not continuously differentiable, function.
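The $n = 2$ case of this signed-rectangle integral might be sketched as follows (grid size, domain, and seed are illustrative assumptions, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed

# Discretize [-1, 1]^2 and draw i.i.d. heights in [-1, 1]: a 2-D analogue
# of the 1-D example, with a small grid for illustration.
n = 501
xs = np.linspace(-1, 1, n)
dx = xs[1] - xs[0]
f = rng.uniform(-1.0, 1.0, (n, n))

def F(x1, x2):
    """Riemann sum for the integral of f over I(x1) x I(x2),
    where I(a) = [0, a] if a >= 0 and I(a) = [a, 0] if a < 0."""
    m1 = (xs >= min(0.0, x1)) & (xs <= max(0.0, x1))
    m2 = (xs >= min(0.0, x2)) & (xs <= max(0.0, x2))
    return f[np.ix_(m1, m2)].sum() * dx * dx

# |f| <= 1 forces |F(x1, x2)| <= |x1| * |x2|, up to discretization error.
```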
(This construction can be iterated: each further integration gains one degree of smoothness, yielding a function that is several times continuously differentiable but whose "last" derivative is not continuous.)
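The iteration can be sketched numerically under the same assumed setup: one pass of the Riemann-sum antiderivative turns the rough $f$ into a Lipschitz $F_1$, and a second pass yields a $C^1$ function $F_2$ whose second derivative is the discontinuous $f$.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed

N = 10**5
x = np.linspace(-3, 3, N)
dx = x[1] - x[0]
f = rng.uniform(-1.0, 1.0, N)

def antiderivative(h):
    """Riemann-sum antiderivative of h, anchored so it vanishes at 0."""
    H = np.cumsum(h) * dx
    return H - H[np.searchsorted(x, 0.0)]

F1 = antiderivative(f)    # Lipschitz, a.e. differentiable, F1' = f a.e.
F2 = antiderivative(F1)   # C^1 with F2' = F1; F2'' = f is not continuous
```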
Best Answer
Let $g=(g_1,g_2)$ and let $t_0 \in [0,1]$. Differentiability of $f\circ g$ at $t_0$ is obvious if $g(t_0) \neq (0,0)$, so assume $g_1(t_0)=g_2(t_0)=0$. We have to show that the difference quotient
$$\frac{f(g(t))-f(g(t_0))}{t-t_0} = \frac {g_1(t)^{3}} {(t-t_0)\,[g_1(t)^{2}+g_2(t)^{2}]}$$
has a limit as $t \to t_0$. Write it as the product
$$\frac{g_1(t)}{t-t_0}\cdot\frac {g_1(t)^{2}} {g_1(t)^{2}+g_2(t)^{2}}.$$
The first factor tends to $g_1'(t_0)$. Dividing numerator and denominator of the second factor by $(t-t_0)^{2}$ gives
$$\frac {g_1(t)^{2}/(t-t_0)^{2}} {g_1(t)^{2}/(t-t_0)^{2}+g_2(t)^{2}/(t-t_0)^{2}},$$
which tends to a finite limit except possibly when $g_1'(t_0)=g_2'(t_0)=0$, so the proof is complete except in that case. When $g_1'(t_0)=g_2'(t_0)=0$, use the fact that the second factor is bounded by $1$ while the first factor tends to $0$, so the product tends to $0$ and the proof is complete.
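A quick numerical sanity check of the two cases. Here $f(x,y)=x^{3}/(x^{2}+y^{2})$ with $f(0,0)=0$ is reconstructed from the difference quotient above (an assumption, since the question's $f$ is not restated here), and the curves $g$ are illustrative choices:

```python
def f(x, y):
    """f(x, y) = x^3 / (x^2 + y^2), f(0, 0) = 0 -- reconstructed from the
    difference quotient in the answer (an assumption)."""
    return 0.0 if (x, y) == (0.0, 0.0) else x**3 / (x**2 + y**2)

def diff_quotient(g, t0, t):
    """(f(g(t)) - f(g(t0))) / (t - t0): the quotient analyzed above."""
    return (f(*g(t)) - f(*g(t0))) / (t - t0)

# Case g'(t0) != (0, 0): g(t) = (t, t^2) at t0 = 0.  The quotient is
# t^3 / (t (t^2 + t^4)) = 1 / (1 + t^2), which tends to 1.
q1 = diff_quotient(lambda t: (t, t**2), 0.0, 1e-6)

# Case g_1'(t0) = g_2'(t0) = 0: g(t) = (t^2, t^3) at t0 = 0.  The bounded
# second factor forces the quotient t / (1 + t^2) to tend to 0.
q2 = diff_quotient(lambda t: (t**2, t**3), 0.0, 1e-6)
```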