When $f:\>\Omega\to{\mathbb R}^m$ is a $C^1$-map defined in a convex region $\Omega\subset{\mathbb R}^n$, then for any two points $a$, $b\in\Omega$ one has
$$|f(b)-f(a)|\leq M|b-a|\ ,\tag{1}$$
where $$M:=\sup_{x\in[a,b]}\|df(x)\|\tag{2}$$
and $[a,b]$ denotes the segment connecting $a$ with $b$. The inequality $(1)$ is a vectorial form of the mean value theorem of differential calculus. The norms $\|df(x)\|$ appearing in $(2)$ can be estimated in terms of the matrix elements of $df(x)$ as follows:
$$\|df(x)\|\leq\sqrt{\sum_{i, k}\left|{\partial f_i\over\partial x_k}(x)\right|^2}\ .$$
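This estimate is nothing but the Cauchy–Schwarz inequality applied row by row: for any vector $v\in{\mathbb R}^n$,
$$|df(x)\,v|^2=\sum_i\left(\sum_k{\partial f_i\over\partial x_k}(x)\,v_k\right)^2\leq\sum_i\left(\sum_k\left|{\partial f_i\over\partial x_k}(x)\right|^2\right)|v|^2\ .$$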
From this one can conclude that an $f\in C^1(\Omega)$ is locally Lipschitz on $\Omega$ and Lipschitz on any compact subset $K\subset\Omega$.
The $f$ in your example has a partial derivative which is unbounded when $\Omega={\mathbb R}^2$; therefore it is to be expected that there is no globally valid Lipschitz constant. In fact
$${|f_2(0,x_2+h)-f_2(0,x_2)|\over h}=2x_2+h\qquad(h>0)$$
becomes arbitrarily large as $x_2\to\infty$. (The variable $t$ plays no role here.)
Consider the first-order initial value problem
$$y'=f(x,y),\qquad y(x_0)=y_0.\tag{1}$$
Existence theorem: Suppose that $f(x,y)$ is a continuous function in some region $$R=\{(x,y):|x-x_0|\le a,\ |y-y_0|\le b\},\qquad a,b>0.$$
Since $f$ is continuous on the closed and bounded region $R$, it is necessarily bounded there,
i.e., there exists $k>0$ such that $|f(x,y)|\le k$ for all $(x,y)\in R$. Then the IVP $(1)$ has at least one solution $y=y(x)$ defined in the interval $|x-x_0|\le\alpha$, where $\alpha=\min\{a,\frac{b}{k}\}$.
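As an illustration (this worked example is not part of the theorem's statement): take the classical IVP $y'=y^2$, $y(0)=1$ on $R=\{(x,y):|x|\le a,\ |y-1|\le b\}$. Then $k=\max_R|f|=(1+b)^2$, and
$$\alpha=\min\Bigl\{a,\frac{b}{(1+b)^2}\Bigr\},\qquad \frac{b}{(1+b)^2}\le\frac14\ ,$$
with equality at $b=1$. So (for $a\ge\frac14$) the theorem guarantees a solution only on $|x|\le\frac14$, even though the actual solution $y=\frac{1}{1-x}$ exists on all of $(-\infty,1)$.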
${}$
Uniqueness theorem: Suppose that $f$ and $\frac{\partial f}{\partial y}$ are continuous functions
in $R$ (defined in the existence theorem). Hence both $f$ and $\frac{\partial f}{\partial y}$ are bounded in $R$, i.e.,
$1.$ $|f(x,y)|\le k$ for all $(x,y)\in R$,
$2.$ $\bigl|\frac{\partial f}{\partial y}\bigr|\le M$ for all $(x,y)\in R$.
Then the IVP $(1)$ has at most one solution $y=y(x)$ defined in the interval $|x-x_0|\le\alpha$, where $\alpha=\min\{a,\frac{b}{k}\}$.
Combining this with the existence theorem, the IVP $(1)$ has a unique solution $y=y(x)$ defined
in the interval $|x-x_0|\le\alpha$.
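If the hypothesis on $\frac{\partial f}{\partial y}$ fails, uniqueness can genuinely be lost. A standard illustration (added here for contrast): the IVP $y'=3y^{2/3}$, $y(0)=0$ has the two distinct solutions
$$y(x)\equiv0\qquad\text{and}\qquad y(x)=x^3\ ,$$
because $\frac{\partial f}{\partial y}=2y^{-1/3}$ is unbounded near $y=0$.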
${}$
Note: The condition $\bigl|\frac{\partial f}{\partial y}\bigr|\le M$ for all $(x,y)\in R$ can be replaced by a weaker condition, known as the Lipschitz condition. Thus, instead of continuity of $\frac{\partial f}{\partial y}$, we require
$|f(x,y_1)-f(x,y_2)|\le L\,|y_1-y_2|$ for all $(x,y_1),(x,y_2)\in R$.
If $\frac{\partial f}{\partial y}$ exists and is bounded, then $f$ necessarily satisfies the Lipschitz condition. On the other hand, a function $f(x,y)$ may be Lipschitz continuous in $y$ even though $\frac{\partial f}{\partial y}$ does not exist everywhere.
For example, $f(x,y)=x^2|y|$, $|x|\le1$, $|y|\le1$, is Lipschitz continuous in $y$, but $\frac{\partial f}{\partial y}$ does not exist at $(x,0)$ for any $x\neq0$.
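For this example the Lipschitz estimate follows at once from the reverse triangle inequality:
$$|f(x,y_1)-f(x,y_2)|=x^2\bigl||y_1|-|y_2|\bigr|\le x^2|y_1-y_2|\le|y_1-y_2|\ ,$$
so $L=1$ works on the whole square, even though $y\mapsto x^2|y|$ has a corner at $y=0$ whenever $x\neq0$.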
${}$
Consider the second-order initial value problem
$$y''(t)+p(t)y'(t)+q(t)y=g(t),\qquad y(t_0)=y_0,\quad y'(t_0)=y'_0.\tag{2}$$
If the functions $p(t)$, $q(t)$, and $g(t)$ are continuous on an open interval $I:\alpha<t<\beta$
containing the point $t=t_0$, then there exists a unique solution $y=\varphi(t)$ of the
problem $(2)$, and this solution exists throughout the interval $I$.
That is, the theorem guarantees that the given initial value problem $(2)$ will
always have (existence) exactly one (uniqueness) twice-differentiable
solution on any interval containing $t_0$, as long as all three functions $p(t)$, $q(t)$,
and $g(t)$ are continuous on that same interval. On the other hand, neither existence
nor uniqueness of a solution is guaranteed at a discontinuity of $p(t)$, $q(t)$, or
$g(t)$.
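As an illustration (a standard textbook-style example, added here): for the IVP
$$t(t-4)\,y''+3t\,y'+4y=2,\qquad y(3)=0,\quad y'(3)=-1,$$
the standard form has $p(t)=\frac{3}{t-4}$, $q(t)=\frac{4}{t(t-4)}$, $g(t)=\frac{2}{t(t-4)}$, which are continuous except at $t=0$ and $t=4$. Since $t_0=3$ lies in $(0,4)$, the theorem guarantees a unique solution on $0<t<4$, the largest interval around $t_0$ on which all three coefficients are continuous.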
${}$
For more information and examples, see the following:
$1.$ "Differential Equations: Theory, Technique and Practice" by G. F. Simmons & S. G. Krantz (McGraw-Hill Higher Education)
$2.$ "Differential Equations" by Shepley L. Ross
$3.$ http://home.iitk.ac.in/~sghorai/TEACHING/MTH203/ode5.pdf
$4.$ http://www.math.ucsd.edu/~y1zhao/2013UCSDFallQuarterMath20D/Files/Section3.2.pdf
$5.$ http://www.ltcconline.net/greenl/courses/204/ConstantCoeff/uniquenessExistence.htm
Best Answer
Because boundedness is enough. But if you have continuity of the $y$-derivative on a closed rectangle, that implies boundedness of the $y$-derivative, and therefore the Lipschitz property of the function with respect to $y$.
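Explicitly, by the mean value theorem applied in the $y$-variable: if $\bigl|\frac{\partial f}{\partial y}\bigr|\le M$ on the rectangle, then for some $\eta$ between $y_1$ and $y_2$,
$$|f(x,y_1)-f(x,y_2)|=\Bigl|\frac{\partial f}{\partial y}(x,\eta)\Bigr|\,|y_1-y_2|\le M\,|y_1-y_2|\ .$$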
Be careful with the Picard theorem, because it comes in different versions. The solution to Exercise 1.6 (provided by Tony Piccolo) uses a different version from the one you stated in the question. As you stated the theorem, it does not apply to very long intervals.
The crux of the matter is to choose $a$ and $b$ that satisfy the conditions of the theorem. We don't get these numbers from any formula: we have to choose them so that the inequalities $aM<1$ and $aK\le b$ work. Since $M$ and $K$ may themselves depend on $a,b$, we may end up chasing our own tail.
But here, the derivative of $\sin (xy)$ with respect to $y$ is $x\cos(xy)$, which is bounded by $|x|$, independently of $y$. We can take $M$ to be the supremum of $|x|$ on the interval $[x_0-a,x_0+a]$, which is $|x_0|+a$. Also, the function $\sin (xy)$ itself is bounded by $1$ everywhere, so $K=1$.
So, choose $a$ so that $a(|x_0|+a)<1$. Instead of solving this quadratic inequality exactly, you can take $a=\frac12 \min(|x_0|^{-1},1)$. Then you apply the theorem repeatedly to expand the interval of existence; in this way it is possible to cover the whole real line.
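To verify that this choice of $a$ works: we have $a|x_0|\le\frac12$ and $a\le\frac12$, hence
$$a\,(|x_0|+a)=a|x_0|+a^2\le\frac12+\frac14=\frac34<1\ ,$$
and $aK=a\le\frac12\le b$ as soon as we pick $b\ge\frac12$.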
But I'll have to say that this is a really laborious way to deal with this ODE. The proper way is to use the better form of the theorem, which does not require $aM<1$. See "optimization of the solution's interval" here.
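As a purely numerical sanity check (not a proof; this sketch assumes SciPy is available, and the initial condition $y(0)=1$ is chosen only for illustration), one can integrate $y'=\sin(xy)$ across a long interval and watch the solution stay tame:

```python
# Numerical sanity check (not a proof): integrate y' = sin(x*y) over a
# long interval; since |sin(x*y)| <= 1, the solution grows at most
# linearly, so the integrator comfortably reaches the end.
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    return np.sin(x * y)  # right-hand side of the ODE

x0, y0 = 0.0, 1.0         # illustrative initial condition (assumed)
sol = solve_ivp(f, (x0, 50.0), [y0], rtol=1e-8, atol=1e-10)

print(sol.status)         # 0 means the solver reached x = 50
print(sol.y[0, -1])       # value of the solution at x = 50
```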