Define $f: \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R}^2$ by $f(w,x) = (x^3 (w_1^3 +w_2^3 ), (x-w_1)^3 -w_2^2 -7)^T$.
Note that ${ \partial f((-1,1)^T, 1) \over \partial w} = \begin{bmatrix} 3 & 3 \\
-12 & -2 \end{bmatrix}$ is invertible, and $f((-1,1)^T, 1) = (0,0)^T$, hence there is a differentiable function $\omega: U \to V$, where $U$ is a neighbourhood of $1$ and $V \subset \mathbb{R}^2$ is a neighbourhood of $(-1,1)^T$, such that $f(\omega(x),x) = 0$ for all $x \in U$.
The curve $x \mapsto \omega(x)$ is the curve you are looking for.
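If you want to see this curve concretely, here is a minimal numerical sketch (not part of the argument): for each fixed $x$ near $1$, solve $f(w,x)=0$ for $w$ with Newton's method, starting from $(-1,1)^T$. The helper names (`omega`, `df_dw`) are made up for illustration.

```python
import numpy as np

def f(w, x):
    """The map from above: f(w, x) in R^2, with w in R^2 and x in R."""
    w1, w2 = w
    return np.array([x**3 * (w1**3 + w2**3),
                     (x - w1)**3 - w2**2 - 7.0])

def df_dw(w, x):
    """Jacobian of f with respect to w (a 2x2 matrix)."""
    w1, w2 = w
    return np.array([[3 * x**3 * w1**2, 3 * x**3 * w2**2],
                     [-3 * (x - w1)**2, -2 * w2]])

def omega(x, w0=(-1.0, 1.0), tol=1e-12, max_iter=50):
    """Solve f(w, x) = 0 for w by Newton's method, starting from w0."""
    w = np.array(w0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(df_dw(w, x), f(w, x))
        w = w - step
        if np.linalg.norm(step) < tol:
            break
    return w

# Trace the implicit curve for a few x near 1; at x = 1 we recover (-1, 1)^T.
for x in (0.9, 1.0, 1.1):
    w = omega(x)
    print(x, w, f(w, x))   # the residual f(w, x) should be essentially zero
```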
Let's review the two-variable Implicit Function Theorem:
Let $U\subseteq\mathbb{R}^2$ be open and $F\colon U\rightarrow\mathbb{R}$ be a continuously differentiable function. If $(x_0,y_0)\in U$ is such that $F(x_0,y_0)=0$ and $\partial_yF(x_0,y_0)\neq0$, then there exist $\varepsilon,\delta>0$ and a unique mapping $g\colon(x_0-\varepsilon,x_0+\varepsilon)\rightarrow(y_0-\delta,y_0+\delta)$ such that $F(x,g(x))=0$ for all $x\in(x_0-\varepsilon,x_0+\varepsilon)$. Furthermore, $g$ is continuously differentiable.
The strength of this theorem is manifold: it asserts the local existence of an implicit function, its uniqueness, and its continuous differentiability. The theorem often comes with an explicit formula for the derivative, but that part is straightforward once you have differentiability. We have
$$F(x,g(x))=0\qquad\forall x\in(x_0-\varepsilon,x_0+\varepsilon).$$
Differentiating this identity with the chain rule yields
$$0=\begin{pmatrix}\partial_xF(x,g(x))&\partial_yF(x,g(x))\end{pmatrix}\begin{pmatrix}1\\g^{\prime}(x)\end{pmatrix}=\partial_xF(x,g(x))+g^{\prime}(x)\partial_yF(x,g(x)),\ \forall x\in(x_0-\varepsilon,x_0+\varepsilon)$$
Equivalently,
$$g^{\prime}(x)=-\frac{\partial_xF(x,g(x))}{\partial_yF(x,g(x))},\ \forall x\in(x_0-\varepsilon,x_0+\varepsilon)$$
Note that this is well-defined, because $\partial_yF(x,g(x))\neq0$ in a sufficiently small neighborhood of $x_0$, since $F$ is continuously differentiable. This answers (a) affirmatively.
However, note that, in general, the derivative $g^{\prime}(x)$ will depend on $g(x)$, as seen in the above formula. You know that $g(x_0)=y_0$, so you can explicitly calculate $g^{\prime}(x_0)$, but you generally cannot calculate $g^{\prime}(x)$ any better than you can calculate $g(x)$ itself, which you usually cannot do, as otherwise you wouldn't need to apply the Implicit Function Theorem (there is a reason why these functions are called implicit, after all). Since you don't have $g^{\prime}(x)$ explicitly, you also cannot easily find a primitive. Reconstructing $g$ from the formula for $g^{\prime}$ is akin to solving a differential equation, which, in general, can be a very hard problem. This answers (b) essentially in the negative.
For an example, take $F\colon\mathbb{R}^2\rightarrow\mathbb{R},\,(x,y)\mapsto x+y+y^5$. For any fixed $x\in\mathbb{R}$, the map $y\mapsto F(x,y)$ increases strictly from $-\infty$ to $+\infty$, so there is a unique $y=g(x)$ such that $x+g(x)+g(x)^5=0$. According to Wolfram, you can only express this $g$ as an infinite series. However, you can apply the Implicit Function Theorem and deduce that $g^{\prime}(x)=-(1+5g(x)^4)^{-1}$ for all $x\in\mathbb{R}$. This tells you that $g$ is monotonically decreasing, which is useful, although you could figure this out from just looking at $F$ as well. However, reconstructing $g$ from this seems hard. I wouldn't know how to do it; you can give it a try.
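A quick numerical sanity check of this example (a sketch only): since $F(x,\cdot)$ is strictly increasing, $g(x)$ can be computed by bisection, and the finite-difference slope of $g$ can be compared with the formula from the theorem.

```python
import numpy as np

def g(x, lo=-10.0, hi=10.0, iters=200):
    """Solve x + y + y^5 = 0 for y by bisection (F is strictly increasing in y)."""
    F = lambda y: x + y + y**5
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x, h = 2.0, 1e-6
fd = (g(x + h) - g(x - h)) / (2 * h)    # finite-difference slope of g
ift = -1.0 / (1.0 + 5.0 * g(x)**4)      # formula from the Implicit Function Theorem
print(fd, ift)                          # the two values agree to several digits
```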
Lastly, a more advanced version of the Implicit Function Theorem states that if $F$ is analytic, then $g$ will be too. In that case, you can in principle calculate all the derivatives of $g$ at $x_0$ from the derivatives of $F$ at $(x_0,y_0)$ (by repeatedly differentiating the implicit equation) and thus expand $g$ into a Taylor series around $x_0$. This can, of course, be difficult and may not prove insightful. If you want a reference on this or the Implicit Function Theorem generally, you can check out "The Implicit Function Theorem: History, Theory, and Applications" by Krantz and Parks.
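For the same example $F(x,y)=x+y+y^5$ with $(x_0,y_0)=(0,0)$, this repeated implicit differentiation can be automated, e.g. with SymPy's `idiff` helper (a sketch, assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x + y + y**5   # the example from above; g(0) = 0

# idiff differentiates the identity F(x, g(x)) = 0 repeatedly and returns the
# n-th derivative of the implicit function, expressed in terms of x and y.
for n in (1, 2, 3):
    dn = sp.idiff(F, y, x, n)
    print(n, dn.subs({x: 0, y: 0}))   # gives -1, 0, 0, so g(x) = -x + O(x^4) near 0
```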
Best Answer
Remember the case of the real function of one real variable: $y=f(x)$. At a point $x_0$ where the function is differentiable, you have $f(x)=f(x_0)+f'(x_0)(x-x_0)+\text{ error term}$, where the error term is $o(x-x_0)$ when $x\to x_0$. This means that, close to $x_0$, the function is well-approximated by a linear function $f(x_0)+f'(x_0)(x-x_0)$.
For many variables, the situation is the same: if the function is differentiable at a point, it means that it can be closely approximated by a linear function. For example: let $F:X\to Y$ where $X\subseteq\mathbb R^n$ and $Y\subseteq\mathbb R^m$ ("$m$ functions of $n$ variables"), and let us write $F(x_1,\ldots,x_n)=(F_1(x_1,\ldots,x_n),\ldots,F_m(x_1,\ldots,x_n))$. If we assume that this function is differentiable at a point $(X_1,\ldots,X_n)\in X$, this means that:
$$F(x_1,\ldots,x_n)=F(X_1,\ldots,X_n)+dF_{(X_1,\ldots,X_n)}\left[x_1-X_1,\ldots,x_n-X_n\right]+\text{ error term}$$
where $dF$ (the differential of the function, taken at the point $(X_1,\ldots,X_n)$) is a linear map, and the error term is "small" (for a suitable definition of "small") with respect to the vector $(x_1-X_1,\ldots,x_n-X_n)$. It just so happens that, in the one-function-of-one-variable case, any linear map amounts to multiplication by a constant, which we call the derivative of the function, while here the derivative is more complicated but has the same nature.
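As a small numerical illustration of this (a toy "$m$ functions of $n$ variables" example with $n=3$, $m=2$; all names are made up), the error of the linear approximation, divided by the length of the displacement, goes to zero:

```python
import numpy as np

def F(x):
    """A toy map from R^3 to R^2."""
    x1, x2, x3 = x
    return np.array([x1 * x2 + np.sin(x3),
                     x1**2 - x3])

def jacobian(x):
    """Matrix of partial derivatives of F at x (the matrix of dF)."""
    x1, x2, x3 = x
    return np.array([[x2, x1, np.cos(x3)],
                     [2 * x1, 0.0, -1.0]])

X = np.array([1.0, 2.0, 0.5])
for t in (1e-1, 1e-2, 1e-3):
    h = t * np.array([1.0, -1.0, 2.0])
    err = np.linalg.norm(F(X + h) - F(X) - jacobian(X) @ h)
    print(t, err / np.linalg.norm(h))   # this ratio tends to 0, i.e. the error is o(|h|)
```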
Later on, you learn the term "tangent surface": at the point $(X_1,\ldots,X_n)$, if you forget about the error term, you get a linear function that approximates the original function well near that point:
$$F(X_1,\ldots,X_n)+dF_{(X_1,\ldots,X_n)}\left[x_1-X_1,\ldots,x_n-X_n\right]$$
The image of this map (the "tangent surface") is a flat surface of dimension at most $n$ (an affine subspace) in $\mathbb R^m$, which passes through $F(X_1,\ldots,X_n)$ just like the original function $F$, and is "close" to it in the neighbourhood of $(X_1,\ldots,X_n)$.
Also, you learn that, in the given coordinates, this map $dF$ has a matrix consisting of the partial derivatives of the functions $F_1,\ldots,F_m$. Thus, the determinants of the square submatrices of that matrix are Jacobian determinants.
The bigger point here is that you can use the machinery of linear algebra to study the behaviour of the function $F$ near the chosen point. For example, when can you "invert" a linear map? You know that it depends on the rank of that linear map; in particular, if the rank is $m$ (the largest possible when $m\le n$), then one of the $m\times m$ submatrices of the matrix of the linear map has nonzero determinant. That immediately lets you invert the above linear map in the corresponding variables: for every choice of the other $n-m$ variables, you can solve for the $m$ variables corresponding to the (linearly independent) columns of that submatrix.
In effect: [1] you've replaced $F$ with its linear approximation, and [2] you know how to invert that approximation. The essence of the implicit function theorem is that you can then invert the original function $F$ in the same sense - solve for the same variables - as long as you are close enough to the point $(X_1,\ldots,X_n)$ where you are running your analysis.
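To make the "invert the linear approximation" step concrete, here is a small linear-algebra sketch with a made-up $2\times3$ Jacobian of full rank $2$ ($m=2$ equations, $n=3$ variables): fix the remaining free variable and solve the square system for the other two.

```python
import numpy as np

# Linearized equation dF[h] = 0 at a point where the Jacobian has full rank m = 2.
J = np.array([[2.0, 1.0, 0.5],
              [1.0, 0.0, -1.0]])   # an illustrative 2x3 Jacobian

A = J[:, :2]    # invertible 2x2 submatrix: the variables (h1, h2) we solve for
b = J[:, 2]     # column of the remaining free variable h3
assert abs(np.linalg.det(A)) > 1e-12   # the rank condition

# For every choice of h3, solve A @ (h1, h2) = -b * h3:
h3 = 0.7
h12 = np.linalg.solve(A, -b * h3)
print(h12, J @ np.array([*h12, h3]))   # the residual is (numerically) zero
```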