You can't find $G$ uniquely, since your equation tells you only about its behavior near points that are hit by $(x_1,x_2)$.
What we can do is compute $\frac{d}{dt}G(x_1(t),x_2(t))$ by the chain rule:
$$ \frac{d}{dt}G(x_1(t),x_2(t)) = u_1(t)\frac{d}{dt}x_1(t) + u_2(t)\frac{d}{dt}x_2(t) $$
where everything on the right is known. Arbitrarily setting $G(x_1(0),x_2(0))=0$ (any constant term can be added of course) we get
$$ G(x_1(s),x_2(s)) = \int_0^s \left[u_1(t)\frac{d}{dt}x_1(t)+u_2(t)\frac{d}{dt}x_2(t) \right] \, dt $$
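As a sanity check, here is a small sympy sketch verifying that this integral recovers $G$ along the curve; the particular $G$ and curve below are my own choices, not from the question, and any smooth choices would do.

```python
import sympy as sp

t, s, a, b = sp.symbols('t s a b')

# Hypothetical test data: a known G and a curve, chosen only for illustration.
G = a**2 + a*b                    # a known G(a, b) to test against
x1, x2 = sp.cos(t), sp.sin(t)     # the curve (x_1(t), x_2(t)), starting at (1, 0)

# u_i(t): the partial derivatives of G, evaluated along the curve
u1 = sp.diff(G, a).subs({a: x1, b: x2})
u2 = sp.diff(G, b).subs({a: x1, b: x2})

integrand = u1*sp.diff(x1, t) + u2*sp.diff(x2, t)
lhs = sp.integrate(integrand, (t, 0, s))                      # the integral above
rhs = G.subs({a: sp.cos(s), b: sp.sin(s)}) - G.subs({a: 1, b: 0})
print(sp.simplify(lhs - rhs))                                 # 0
```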
This gives you some values of $G$, and you'd better hope that whenever $(x_1(t),x_2(t))=(x_1(s),x_2(s))$ for some $t \neq s$, the computed values of $G$ match; otherwise there's no solution. Similarly, you must have $u(t)=u(s)$ at such points, since otherwise you'd have conflicting demands on the partial derivatives of $G$ there.
Afterwards you need to choose neighboring values of $G$ so that the partial derivatives are right. This can be done if the curve described by $(x_1,x_2)$ is smooth enough, but not at all uniquely, of course. The simplest solution may be to make $G$ vary linearly along a short perpendicular to the main curve at $(x_1(t),x_2(t))$, with a slope chosen to make the partial derivatives come out right, as in the sketch below.
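A minimal numeric sketch of that last construction, with names of my own choosing (this is only one of many possible extensions):

```python
import numpy as np

# Extend G linearly along the unit normal at each curve point, with slope
# u(t)·n, so that the gradient of G at the curve point equals the prescribed
# u(t): the tangential derivative u·T comes from the values on the curve,
# and the normal derivative u·n comes from the linear term.
def extend_G(t, s, dx, u, G_on_curve):
    """G at signed distance s along the unit normal at parameter t.

    dx(t) is the curve's velocity, u(t) the prescribed gradient, and
    G_on_curve(t) the value computed by the integral above.
    """
    tangent = np.asarray(dx(t), dtype=float)
    n = np.array([-tangent[1], tangent[0]]) / np.linalg.norm(tangent)
    return G_on_curve(t) + s * float(np.dot(u(t), n))
```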
tl;dr: Functions.
In mathematics, a function is an abstraction of a deterministic relationship, often (but not necessarily) between two numerical quantities. Only once a relationship is fixed does a "rate of change" make sense. But in any case, the relationship governs the rate of change, not the quantities.
If you'll excuse a mildly provocative comment, the conceptual problem here stems from Leibniz notation, which hides functional relationships. Leibniz notation is incredibly useful when one computes, but when one is trying to understand calculus theoretically Leibniz notation (in my experience) loses out to Newtonian notation.
Let's consider the examples in question:
For example, take $z=f(x,y)$. In this case we can 'fix' one of the variables to the value $y_0$; the question then becomes: is the derivative of $z_0=f(x,y_0)$ defined here? $y$ is now constant; however, if the derivative is defined on $f$, the partial derivative should exist.
From a Newtonian perspective, we've defined a new function of one variable, say $g(x) = f(x, y_{0})$. The "derivative with respect to $y$" makes no sense.
Does it make sense to talk about the rate at which $f(x,y_0)$ is changing? Can we, for example, write $\frac{dz_0}{dx}$, or $\frac{d(f(x,y_0))}{dx}$, to talk about the rate at which the value $f(x,y_0)$ changes? Or can we only discuss the function's partial derivatives $f_x'(x,y_0)$ and $f_y'(x,y_0)$?
Here, in the preceding notation, it's reasonable to interpret "the rate of change" as $g'$. That's a "total derivative" of $g$. It can also be interpreted as a partial derivative of $f$ with respect to its first variable, namely $D_{1}f(x, y_{0})$ in the notation of Spivak's Calculus on Manifolds.
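A quick sympy illustration of the distinction; the concrete $f$ and the value $y_{0} = 3$ below are arbitrary choices of mine:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * sp.sin(y)      # a concrete f, chosen only for illustration
y0 = 3                    # the fixed value y_0 (arbitrary)

g = f.subs(y, y0)                  # g(x) = f(x, y0)
print(sp.diff(g, x))               # g'(x)         -> 2*x*sin(3)
print(sp.diff(f, x).subs(y, y0))   # D_1 f(x, y0)  -> 2*x*sin(3), the same
```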
Another thing: if we apply $f$ to two arguments that depend on each other, we get the total derivative, which is different depending on their relation. If the function is defined independently, how can this be the total derivative of a function?
This is a reasonable question, but exemplifies why Leibniz notation is a source of confusion. Standard quasi-paradoxes with the multivariable chain rule are often set up in this framework. Writing $w = w(x, y, z) = w(x, y, z(x, y)) = w(x, y)$, for example, is asking for trouble on more levels than I care to count. These tangles can be reconciled by carefully defining functions, using different letters for different deterministic relationships.
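As an illustration of "different letters for different relationships", here is one way the $w$ example untangles in sympy; the concrete formulas are mine, purely for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
W = x + y + z**2          # the three-variable relationship W(x, y, z)
w = W.subs(z, x*y)        # a DIFFERENT function of (x, y), via z = x*y

print(sp.diff(W, x))      # 1: partial of the three-variable function
print(sp.diff(w, x))      # 2*x*y**2 + 1: derivative after substituting z(x, y)
```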
If we take $\frac{df(x^2)}{dx}$, it seems we take the 'derivative' of the value $f(x^2)$, but perhaps this notation can be seen as the 'derivative of the function whose value is $f(x^2)$', which seems a bit strange.
From a Newtonian perspective, we're introducing $g(x) = f(x^{2})$, so $g'(x) = 2xf'(x^{2})$ is "the rate of change" by the chain rule. One might, I suppose, expect instead that "the rate of change" is $f'(x^{2})$, the derivative of $f$ evaluated at $x^{2}$, but I think that is not how most people would read $\frac{df(x^{2})}{dx}$.
Incidentally, Leibniz notation makes writing $f'(x^{2})$ inconvenient at best. As a result, at least some calculus students develop a tacit misconception that evaluation and differentiation commute. This bold assertion is based on experiences teaching the multivariable chain rule.
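That misconception is easy to expose with a one-line computer algebra check; the concrete choice $f = \sin$ is arbitrary:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.sin(x**2)                             # g(x) = f(x^2) with f = sin

print(sp.diff(g, x))                         # 2*x*cos(x**2) = 2x f'(x^2)
print(sp.diff(sp.sin(x), x).subs(x, x**2))   # cos(x**2) = f'(x^2): not equal
```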
Best Answer
From a modern point of view, it's best to just not use dependent variables. Dependent variables have long been replaced by the concept of a function. Whenever you use dependent variables, especially in the context of analysis, you should instead model those dependent variables as functions.
Here specifically, you have a function $f_2$ of three variables. That's a function $f_2:\mathbb R^3\to \mathbb R$. Then you say that you want $u$ to equal $x^2$. Now all three variables are supposed to depend on only two variables. So here you should introduce a new function $h:\mathbb R^2\to\mathbb R^3$ which maps $(x,y)\mapsto(x^2,x,y)$. This function models how your three variables $u,x,y$ depend on $x$ and $y$. Now your function $f_1$ is simply
$$f_1=f_2\circ h.$$
Note that $f_1$ and $f_2$ are not the same function. They don't even have the same domain. So it should be no surprise that their partial derivatives are different. When taking partial derivatives of functions with dependent variables, you should be very clear which function you're taking the partial derivatives of, because they're not the same!
In a physics context, I've seen the following way of specifying the function to differentiate: $\frac{\mathrm d}{\mathrm dx}$ means to take the partial derivative of $f_1$ with respect to $x$ (though they call this a total derivative), while $\frac{\partial}{\partial x}$ means to take the partial derivative of $f_2$ with respect to $x$. The $\mathrm d$ essentially says to take all dependencies on $x$ into account, while $\partial$ says to only take "explicit" dependencies into account.
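A short sympy sketch of that convention, with a concrete $f_2$ of my own choosing: the $\partial/\partial x$ of $f_2$ holds $u$ fixed, while the $\mathrm d/\mathrm dx$ of $f_1 = f_2\circ h$ tracks the dependence $u = x^2$.

```python
import sympy as sp

x, y, u = sp.symbols('x y u')
f2 = u + x*y              # f2(u, x, y); any example would do
f1 = f2.subs(u, x**2)     # f1 = f2 ∘ h with h(x, y) = (x^2, x, y)

print(sp.diff(f2, x))     # y:       the "partial" ∂f2/∂x, with u held fixed
print(sp.diff(f1, x))     # 2*x + y: the "total" df1/dx, with u = x^2 tracked
# Chain rule: df1/dx = ∂f2/∂x + ∂f2/∂u * d(x^2)/dx = y + 1*2*x
```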