Define $F: \Bbb{R}^3 \to \Bbb{R}^2$ by $F(t,x,y) = (x^2y + xy^2 + t^2, x^2 + y^2-2yt)$. We're going to apply the implicit function theorem to this function $F$. Now, notice that $F(1,-1,1) = (1,0)$. Also, the Jacobian matrix of partial derivatives is
\begin{align}
F'(t,x,y) &=
\begin{pmatrix}
2t & 2xy + y^2 & x^2 + 2xy \\
-2y & 2x & 2y - 2t
\end{pmatrix}
\end{align}
So,
\begin{align}
F'(1,-1,1) &=
\begin{pmatrix}
2 & -1 & -1 \\
-2 & -2 & 0
\end{pmatrix}
\end{align}
Note that every $2 \times 2$ submatrix is invertible. So, by the implicit function theorem, you can solve for any two out of the three variables as a $C^{\infty}$ function of the third. In particular, you can solve for $x$ and $y$ as a function of $t$, in a neighbourhood of $t=1$, which is what you had to prove.
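If you want to double-check the arithmetic, the value of $F$, the Jacobian at $(1,-1,1)$, and the three $2\times 2$ minors can all be verified symbolically. A small sketch using sympy (the variable names here are my own):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
F = sp.Matrix([x**2*y + x*y**2 + t**2, x**2 + y**2 - 2*y*t])
p = {t: 1, x: -1, y: 1}

# F(1, -1, 1) = (1, 0)
print(sp.flatten(F.subs(p)))        # [1, 0]

# Jacobian of F at the point
J = F.jacobian([t, x, y]).subs(p)
print(J)                            # Matrix([[2, -1, -1], [-2, -2, 0]])

# Every 2x2 submatrix (columns t&x, t&y, x&y) has nonzero determinant
for cols in ([0, 1], [0, 2], [1, 2]):
    print(cols, J[:, cols].det())   # -6, -2, -2
```

All three determinants are nonzero, which is exactly the "every $2\times 2$ submatrix is invertible" claim above.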
You're probably more comfortable with the lower-dimensional situation where $f: \Bbb{R}^2 \to \Bbb{R}$, and if there is a point $(a,b)$ where $\dfrac{\partial f}{\partial y}(a,b) \neq 0$, then in a neighbourhood of $(a,b)$, you can express $y$ as a smooth function of $x$. Well, this question is just a generalization to more variables.
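To make that lower-dimensional case concrete, take $f(x,y) = x^2 + y^2 - 2$ at the point $(1,1)$ (an example of my own choosing, not from the question): $\frac{\partial f}{\partial y}(1,1) = 2 \neq 0$, and indeed $y = \sqrt{2 - x^2}$ near $x = 1$. A quick sympy check:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 2          # zero set: circle of radius sqrt(2) through (1, 1)

# IFT hypothesis at (a, b) = (1, 1): the partial in y must be nonzero
dfdy = sp.diff(f, y).subs({x: 1, y: 1})
print(dfdy)                   # 2

# Here we can even solve explicitly and pick the branch through (1, 1)
branch = [s for s in sp.solve(f, y) if s.subs(x, 1) == 1][0]
print(branch)                 # sqrt(2 - x**2)
```

In general the theorem only guarantees that such a branch exists locally; the circle is special in that you can write it down in closed form.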
Since you want to solve for $x,y$ as a function of $t$, you simply have to check if
\begin{align}
\det \begin{pmatrix} \frac{\partial F_1}{\partial x} & \frac{\partial F_1}{\partial y} \\ \frac{\partial F_2}{\partial x} & \frac{\partial F_2}{\partial y}\end{pmatrix} \neq 0
\end{align}
(all the derivatives being evaluated at the appropriate point). And as I've shown above, this is indeed the case. For the statement of the theorem in the general case, take a look at the Wikipedia page.
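Concretely, for this problem the determinant to check is the $(x,y)$ block evaluated at $(1,-1,1)$; a minimal sympy computation:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
F1 = x**2*y + x*y**2 + t**2
F2 = x**2 + y**2 - 2*y*t

# The block of partials with respect to the variables being solved for (x, y)
M = sp.Matrix([[F1.diff(x), F1.diff(y)],
               [F2.diff(x), F2.diff(y)]]).subs({t: 1, x: -1, y: 1})
print(M.det())   # -2, nonzero, so x and y solve locally as functions of t
```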
Best Answer
My copy of Rudin, Principles of Mathematical Analysis, says (Theorem 9.28, the implicit function theorem):

Let $f$ be a $C'$-mapping of an open set $E \subset \mathbb{R}^{n+m}$ into $\mathbb{R}^n$, such that $f(a,b) = 0$ for some point $(a,b) \in E$. Put $A = f'(a,b)$ and assume that $A_x$ is invertible. Then there exist open sets $U \subset \mathbb{R}^{n+m}$ and $W \subset \mathbb{R}^m$, with $(a,b) \in U$ and $b \in W$, having the following property: to every $y \in W$ corresponds a unique $x$ such that $(x,y) \in U$ and $f(x,y) = 0$. If this $x$ is defined to be $g(y)$, then $g$ is a $C'$-mapping of $W$ into $\mathbb{R}^n$, $g(b) = a$, and $f(g(y), y) = 0$ for all $y \in W$.

(Here $A_x$ denotes the map $A$ restricted to the first, $\mathbb{R}^n$, factor.)
The hypotheses you seek are in the first paragraph.
Thinking about it, based on your description in the question, I'm not quite sure that you understand the implicit function theorem. Instead of saying what you cannot do, it says what you can do. That is, the IFT is what gives you permission in the first place to write some of the variables as functions of the others.
The way I like to think about the IFT is this. You have the zero set of a function $F:\mathbb{R}^N\to\mathbb{R}^n$, and you want to parametrize it as the graph of a function near a point $p$ in the zero set. You split $\mathbb{R}^N$ as a product of two spaces, $\mathbb{R}^n\times\mathbb{R}^m$, so that $p = (a,b)$. The goal is to find a function $g:\mathbb{R}^m\to\mathbb{R}^n$ with $g(b) = a$ whose graph, near $p$, is exactly the zero set.
For the zero set to be the graph of a function, it has to pass the vertical line test. Infinitesimally, that means that when you move in the $\mathbb{R}^n$ direction at $(a,b)$, you can't be tangent to the level set. You check this by verifying that $A_x$, the restriction of the differential $A = F'(a,b)$ to the first, $\mathbb{R}^n$, factor, is invertible.
Now you're good to go. The level set passes the vertical line test at $(a,b)$, and the magic of differential calculus says that you can come up with the function $g$. Unfortunately, it is too much to ask for $g$ to be defined everywhere in $\mathbb{R}^m$, so all you get is an open set $W \subset \mathbb{R}^m$ as the domain of $g$, and the graph of $g$ lies inside an open set $U$ around $(a,b)$.
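To make the "local graph" concrete with the example from the answer above (the level set $F(t,x,y) = (1,0)$), one can follow the solution $(x(t), y(t))$ numerically starting from the known point $(-1, 1)$ at $t = 1$. A sketch using sympy's `nsolve` (my own illustration, not part of Rudin's statement; Newton's method stands in for the $g$ the theorem provides):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
# Level-set equations F(t, x, y) = (1, 0)
eqs = (x**2*y + x*y**2 + t**2 - 1, x**2 + y**2 - 2*y*t)

# Track g(t) = (x(t), y(t)) for t near 1, seeded at the known solution (-1, 1)
for tval in (1.0, 1.05, 1.1):
    sol = sp.nsolve([e.subs(t, tval) for e in eqs], (x, y), (-1, 1))
    print(tval, sol[0], sol[1])
```

The solutions stay close to $(-1, 1)$ as $t$ moves away from $1$, which is the local, graph-like behavior the theorem promises; far from $t = 1$ there is no guarantee the branch survives.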