Recall that the center manifold of a dynamical system at an equilibrium point is made of the orbits tangent to the center subspace at this point, where the center subspace is spanned by the eigenvectors of the linearized system at this point, corresponding to eigenvalues with real part zero.
Here one considers the equilibrium point $(0,0)$; the linearized system at this point is $$\dot x=0,\qquad \dot y=-y,$$ hence the two eigenvalues at $(0,0)$ are $0$, with eigenvector $(1,0)$, which yields the center subspace $\{(x,y)\mid y=0\}$, and $-1$, with eigenvector $(0,1)$, which yields the stable subspace $\{(x,y)\mid x=0\}$.
Regarding the center manifold $C$ at $(0,0)$: after some tedious computations, comparing the coefficients of the powers of $x$ on both sides, one gets a rather different result, namely, in your notation, $$g(x)=x^2.$$ Thus, $C=\{(x,y)\mid y=x^2\}$ and the dynamics on $C$ is $$\dot x=-xg(x)=-x^3.$$ To solve question (b), change coordinates by considering $$y=g(x)\cdot z=x^2\cdot z,$$ which only excludes, as is natural, the stable manifold $S=\{(x,y)\mid x=0\}$. Then the $(x,z)$ differential system is $$\dot x=-x^3\cdot z,\qquad \dot z=-z+1,$$ in particular, $$z(t)\to1.$$ Thus, every solution of the system is attracted to the center manifold $C=\{(x,y)\mid y=g(x)\}$ in the sense that, for every initial condition $(x(0),y(0))$ not in the stable manifold $S=\{(x,y)\mid x=0\}$, $$\lim_{t\to+\infty}\frac{y(t)}{g(x(t))}=1.$$
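A few lines of Python make the convergence $z(t)\to1$ concrete: integrating the $(x,z)$ system above from an initial condition off the stable manifold (the values $x(0)=0.5$, $z(0)=5$ are illustrative, not from the question), $z$ settles at $1$, i.e. $y/g(x)\to1$. A minimal explicit-Euler sketch:

```python
# Numerical check of the (x, z) system dx/dt = -x^3 z, dz/dt = -z + 1:
# z(t) should converge to 1, i.e. y(t)/g(x(t)) -> 1 off the stable manifold.

def rhs(x, z):
    return -x**3 * z, -z + 1.0

x, z = 0.5, 5.0   # illustrative initial condition with x(0) != 0
h = 1e-3
for _ in range(20_000):   # integrate to t = 20 with explicit Euler
    dx, dz = rhs(x, z)
    x, z = x + h * dx, z + h * dz
# now z is within about 4*exp(-20) of 1, while x has decayed slowly toward 0
```

Note that $x$ itself decays only algebraically (the dynamics on $C$ is $\dot x=-x^3$), while the attraction to the center manifold, measured by $z-1$, is exponential.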
DAE vs. ODE
Almost any DAE system can be reduced to an ODE system. Since this reduction requires differentiating the equations, they have to be differentiable to the required order.
$\newcommand{\pd}[2]{\frac{\partial#1}{\partial#2}}$
In your example, you could, as per comment, solve the second equation for $y$ and insert into the first one. This is the same as taking the derivative of the second equation to get a differential equation for $y$,
$$
\pd{g}{t}(x,y,t)+\pd{g}{x}(x,y,t)\cdot f(x,y,t)+\pd{g}{y}(x,y,t)\cdot \dot y=0.
$$
As is visible, and as demanded by the implicit function theorem, this only works if $\pd{g}{y}$ is invertible. If that is not the case, further derivatives of the equations may give rise to a complete ODE system; the maximal number of differentiations needed for any equation is the index of the DAE.
Consequently, any ODE system is an index-0 DAE system.
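As a minimal sketch of this one-differentiation (index-1) reduction, take the hypothetical autonomous pair $f(x,y)=x+y$ and $g(x,y)=y-x^2$ (so $\pd{g}{y}=1$, trivially invertible); the hidden ODE is $\dot y=-\pd{g}{x}f\big/\pd{g}{y}$, and integrating it from a consistent initial state keeps the constraint satisfied up to discretization error:

```python
# Index-1 reduction for x' = f(x, y), g(x, y) = 0 with dg/dy invertible:
# the hidden ODE is y' = -(dg/dx * f) / (dg/dy).
# f and g below are hypothetical examples, chosen so that dg/dy = 1.

def f(x, y):
    return x + y

def g(x, y):
    return y - x * x

def y_dot(x, y):
    g_x, g_y = -2.0 * x, 1.0       # partial derivatives of g
    return -g_x * f(x, y) / g_y    # = 2*x*(x + y)

# Integrate the reduced ODE from a consistent state (g(x0, y0) = 0)
# and watch the constraint along the trajectory.
x, y = 0.2, 0.04
h = 1e-4
for _ in range(5000):              # integrate to t = 0.5 with explicit Euler
    x, y = x + h * f(x, y), y + h * y_dot(x, y)
# g(x, y) stays near 0: the constraint is an invariant of the reduced ODE
```

Starting from an inconsistent state (one with $g(x_0,y_0)\ne0$) the reduced ODE would happily integrate a trajectory that never satisfies the constraint, which is why consistent initialization is its own subproblem for DAE solvers.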
This reduction to an ODE can fail. The equations may not be smooth enough, as in $x_1'=x_2,~ x_1=q$ when $q$ is not differentiable. The index determination may also fail to terminate, that is, there may be no differentiation order at which one can extract explicit equations for the highest-order derivatives. In other words, there may not be any consistent system state, i.e., a state consistent with all equations and their derivatives.
Usefulness of DAEs
Physical systems in particular can be encoded more closely to the physical description, the first principles, using DAE systems. This enables software like Modelica, where large systems are constructed from basic building blocks, each having an inner dynamic of its state and pins/variables connecting to the outside and to other building blocks.
For instance, consider the pendulum as mechanical system restrained to a circle,
\begin{align}
\ddot x+\lambda x&=0
\\
\ddot y + g + \lambda y &= 0
\\
x^2+y^2-r^2&=0
\end{align}
or the corresponding first-order system. While the algebraic equation is solvable for one of the variables, this will not give a dynamical equation for the Lagrangian multiplier $\lambda$; one needs $2$ derivatives of the equations to eliminate $\lambda$, and $3$ derivatives to obtain an ODE for $\lambda$ (which makes this an index-$3$ DAE).
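Assuming unit mass (so the gravity term is just $g$) and writing the system in first-order form, the two differentiations of the constraint give an explicit multiplier, $\lambda=(\dot x^2+\dot y^2-gy)/r^2$, and the DAE becomes an ODE that a standard integrator can handle; a sketch:

```python
# Index reduction of the pendulum DAE: differentiating x^2 + y^2 = r^2
# twice and substituting xdd = -lam*x, ydd = -g - lam*y yields
#   lam = (vx^2 + vy^2 - g*y) / r^2,
# turning the DAE into an explicit ODE.  (Unit mass assumed.)

def pendulum_rhs(state, g=9.81, r=1.0):
    x, y, vx, vy = state
    lam = (vx * vx + vy * vy - g * y) / (r * r)
    return [vx, vy, -lam * x, -g - lam * y]

def rk4_step(f, state, h):
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state = [1.0, 0.0, 0.0, 0.0]   # start horizontal, at rest, r = 1
h = 1e-3
for _ in range(2000):          # integrate 2 seconds
    state = rk4_step(pendulum_rhs, state, h)
x, y, vx, vy = state
drift = abs(x * x + y * y - 1.0)   # constraint violation accumulated so far
```

Since the constraint is only enforced through its second derivative, the numerical solution slowly drifts off the circle; over long time spans one would add constraint stabilization or projection, which is exactly why dedicated DAE solvers exist.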
This system directly expresses the physical situation in Cartesian coordinates, containing the gravity force as the gradient of the potential, and the gradient of the constraint surface with its multiplier as the virtual (constraint) force. The transformation to polar coordinates, as in the reduced pendulum equation, is mathematically simpler but loses this direct physical context.
Best Answer
So, we are given that
$\dot x = f(x, y), \tag 1$
and
$g(x, y) = 0; \tag 2$
I assume that $f(x, y)$ and $g(x, y)$ are (at least) twice continuously differentiable functions in both $x$ and $y$; that is, of class $\mathcal C^2$. We may then differentiate (2) with respect to $t$ and obtain
$g_x(x, y) \dot x + g_y(x, y) \dot y = 0, \tag 3$
where the subscripts denote derivatives:
$g_x(x, y) = \dfrac{\partial g(x, y)}{\partial x}, \tag 4$
and so forth; we can substitute (1) for $\dot x$ in (3):
$g_x(x, y) f(x, y) + g_y(x, y) \dot y = 0, \tag 5$
and since we assume
$g_y(x, y) = \dfrac{\partial g(x, y)}{\partial y} \tag 6$
is non-singular, we may invert it in (5) and write
$g_y^{-1}(x, y)g_x(x, y)f(x, y) + \dot y = 0, \tag 7$
or
$\dot y = -g_y^{-1}(x, y)g_x(x, y)f(x, y); \tag 8$
(1) and (8) together form an ordinary differential equation for the pair $(x, y)$; presumably, if we set
$x(t_0) = x_0, \tag 9$
and
$y(t_0) = y_0, \tag{10}$
then there will be an integral curve $\gamma(t) = (x(t), y(t))$ of the vector field $X(x, y)$,
$X(x, y) = \begin{pmatrix} f(x, y) \\ -g_y^{-1}(x, y)g_x(x, y)f(x, y) \end{pmatrix}, \tag{11}$
such that
$\gamma(t_0) = (x(t_0), y(t_0)) = (x_0, y_0); \tag{12}$
the existence of such a solution curve $\gamma(t)$ through any point $(x_0, y_0)$ follows from the differentiability of the vector field $X(x, y)$, and is the reason we hypothesized $f(x, y), g(x, y) \in \mathcal C^2$; for then, $\nabla g(x, y) \in \mathcal C^1$, as is
$\dot y = -g_y^{-1}(x, y)g_x(x, y) f(x, y); \tag{13}$
it is well known that continuous differentiability implies Lipschitz continuity, at least locally; furthermore, local Lipschitz continuity yields existence and uniqueness of a local solution through any point, by the Picard–Lindelöf theorem.
The above remarks show how the vector field $X(x, y)$ such that $\dot \gamma(t) = X(\gamma(t))$ may be constructed in the event that $g_y^{-1}(x, y)$ exists, and that integral curves of $X(x, y)$ exist and are unique in the sense that at most one satisfies a given set of initial conditions (9)-(10).
Now, as for point
(i), we note that the assumption that $g_y(x, y)$ is invertible allows invocation of the implicit function theorem to conclude that, locally, $y$ may be expressed as a differentiable function $y(x)$ of $x$ such that $g(x, y(x)) = 0$; from this we see that the graph of $y(x)$ is indeed a manifold with local coordinates given by $x$; the fact that $S = \{(x, y) \mid g(x, y) = 0 \}$ is a manifold is essential because it makes the tangent bundle $TS$ a meaningful concept, so that sections of it can be defined; it also ensures that $\nabla g(x, y)$ may legitimately be regarded as a vector normal to $S$, which leads to our next observation: (3) may be written
$\nabla g(x, y) \cdot \dot \gamma(t) = g_x(x, y)\dot x + g_y(x, y) \dot y = 0; \tag{14}$
since $\nabla g(x, y)$ is normal to the manifold $g(x, y) = 0$, this equation tells us that $\dot \gamma(t)$ is tangent to this manifold, i.e., is locally given by a section of the tangent bundle of the manifold given by $g(x, y) = 0$; therefore we see that the vector field $X(x, y) = \dot \gamma$ may in fact be regarded as giving a differential equation on $S = \{ (x, y) \mid g(x, y) = 0\}$. We may also see directly that $X(x, y) = \dot \gamma(t)$ is locally a section of the tangent bundle to $S$ by observing that (11) implies
$\nabla g(x, y) \cdot X(x, y) = g_x(x, y) f(x, y) + g_y(x, y)\left(-g_y^{-1}(x, y) g_x(x, y) f(x, y)\right) = g_x(x, y) f(x, y) - g_x(x, y) f(x, y) = 0, \tag{15}$
which shows directly that the vector field $X(x, y)$ is tangent to $S$, being orthogonal to $\nabla g(x, y)$.
So the condition $\exists g_y^{-1}(x, y)$ is sufficient both to grant a manifold structure to $S$ and to define $X(x, y)$ as a section of $TS$. We can thus interpret $X$ as a differential equation on $S$ in this case.
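The cancellation in (15) is purely algebraic, so it can be checked pointwise for any concrete choice of $f$ and $g$; the functions below are hypothetical, chosen only so that $g_y \ne 0$ on the sampled region:

```python
# Pointwise check of (15): with X = (f, -g_x*f/g_y), the dot product
# grad g . X vanishes wherever g_y != 0.  f and g are hypothetical.
import math
import random

def f(x, y):
    return math.sin(x) + y

def grad_g(x, y):              # for g(x, y) = x*y + y**3
    return y, x + 3.0 * y * y  # (g_x, g_y)

random.seed(0)
checks = []
for _ in range(100):
    # sample a region where g_y = x + 3*y**2 >= 0.75 > 0
    x, y = random.uniform(0.0, 2.0), random.uniform(0.5, 2.0)
    g_x, g_y = grad_g(x, y)
    X1 = f(x, y)
    X2 = -g_x * X1 / g_y       # second component of the vector field X
    checks.append(abs(g_x * X1 + g_y * X2))   # grad g . X, should be ~0
```

Up to floating-point round-off, every sampled value of $\nabla g \cdot X$ is zero, mirroring the exact cancellation in (15).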
As for
(ii), we've practically answered it already at this point; indeed, if
$\dot x = f(x), \tag{16}$
and
$g(x(t)) = c, \; \text{a constant}, \tag{17}$
then
$\dfrac{dg(x(t))}{dt} = 0; \tag{18}$
a result similar to that in (i) may be obtained if we hypothesize that $\nabla g(x) \ne 0$ in a region of interest; then the sets $S_c = \{x \mid g(x) = c \}$ will be well-defined submanifolds, and
$\nabla g(x(t)) \cdot \dot x(t) = \dfrac{dg(x(t))}{dt} = 0, \tag{19}$
which again shows that $f(x) = \dot x(t)$ is tangent to the set $S_c$ for constant $c$; thus in particular, $\dot x(t)$ is a vector field tangent to $\{x \mid g(x) = 0 \}$.
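A concrete instance of (ii) (my example, not from the question): the rotation field $f(x_1,x_2)=(x_2,-x_1)$ has $g(x)=x_1^2+x_2^2$ as a first integral, since $\nabla g\cdot f=2x_1x_2-2x_2x_1=0$, so trajectories stay on the level circles $S_c$:

```python
# First-integral check for x1' = x2, x2' = -x1 with g(x) = x1^2 + x2^2:
# grad g . f = 2*x1*x2 - 2*x2*x1 = 0, so g is conserved along trajectories.

def f(x1, x2):
    return x2, -x1

def rk4_step(x1, x2, h):
    k1 = f(x1, x2)
    k2 = f(x1 + 0.5 * h * k1[0], x2 + 0.5 * h * k1[1])
    k3 = f(x1 + 0.5 * h * k2[0], x2 + 0.5 * h * k2[1])
    k4 = f(x1 + h * k3[0], x2 + h * k3[1])
    return (x1 + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x2 + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x1, x2 = 1.0, 0.0   # start on the unit circle S_1
h = 1e-2
for _ in range(1000):          # integrate to t = 10
    x1, x2 = rk4_step(x1, x2, h)
g_val = x1 * x1 + x2 * x2      # stays at its initial value 1
```

The trajectory traces the unit circle, and the numerically computed $g$ stays at its initial value to within the integrator's (very small) error, illustrating that $f$ is a section of $TS_c$.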