I assume you are trying to solve
$$
x'=F(x)\qquad x(0)=x_0.
$$
Existence: Let $C$ be a Lipschitz constant for $F$.
The idea is to define recursively a sequence of continuous functions on $I:=[-1/2C,1/2C]$ by
$$
x_0(t):=x_0\qquad x_{n+1}(t):=x_0+\int_0^tF(x_n(s))ds.
$$
Clearly, this defines continuous (in fact $C^1$) functions on $\mathbb{R}$. But when we restrict to $I$, we have, for $n\geq 1$,
$$
|x_{n+1}(t)-x_n(t)|\leq\left|\int_0^t|F(x_n(s))-F(x_{n-1}(s))|ds\right|\leq C|t|\sup_I|x_{n}-x_{n-1}|.
$$
Therefore
$$
\|x_{n+1}-x_n\|_\infty\leq \frac{1}{2}\|x_n-x_{n-1}\|_\infty
$$
with the sup norm over $I$.
It follows that $(x_n)$ is Cauchy in the Banach space $C^0(I)$. So it converges to a continuous function $x$ on $I$ which, by uniqueness of limit in a metric space and continuity of the recurrence formula, satisfies
$$
x(t)=x_0+\int_0^tF(x(s))ds \qquad\forall t\in I.
$$
Clearly, $x(0)=x_0$, so the initial condition is fulfilled. Now by the fundamental theorem of calculus, the right-hand side is differentiable and we get
$$
x'(t)=F(x(t))\qquad\forall t\in\left(-\frac{1}{2C},\frac{1}{2C}\right).
$$
So you have a local solution.
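The contraction just described is easy to observe numerically. Here is a minimal sketch (the function names, grid size, and the test ODE $x'=x$, $x(0)=1$ on $[0,1/2]$ are my own choices for illustration; there $C=1$, so $C\cdot T=\tfrac12$ and the iterates are the Taylor partial sums of $e^t$):

```python
import numpy as np

def picard_iterates(F, x0, T, n_iter, n_grid=2001):
    """Compute the Picard iterates x_{k+1}(t) = x0 + int_0^t F(x_k(s)) ds
    on a uniform grid over [0, T], using the trapezoid rule for the integral."""
    t = np.linspace(0.0, T, n_grid)
    h = t[1] - t[0]
    x = np.full_like(t, x0)                  # x_0(t) := x0, the constant iterate
    iterates = [x]
    for _ in range(n_iter):
        f = F(x)
        # cumulative trapezoid rule: approximates int_0^{t_i} F(x_k(s)) ds
        integral = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * h / 2)))
        x = x0 + integral
        iterates.append(x)
    return t, iterates

# x' = x, x(0) = 1 on [0, 1/2]; the limit is e^t.
t, its = picard_iterates(lambda x: x, 1.0, 0.5, 12)
sup_error = np.max(np.abs(its[-1] - np.exp(t)))
print(sup_error)  # uniform distance to the true solution, close to 0
```

One can check that the successive sup-norm differences $\|x_{n+1}-x_n\|_\infty$ shrink by at least the factor $\tfrac12$ at each step, exactly as the estimate above predicts.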
Uniqueness: Any two solutions on this interval satisfy the fixed-point condition
$$
x(t)=x_0+\int_0^tF(x(s))ds.
$$
So the difference of two such solutions has $\|x_1-x_2\|_\infty\leq\frac{1}{2}\|x_1-x_2\|_\infty$. Hence $x_1=x_2$.
Maximal extension:
Consider the set of all extensions of the unique solution we have just found which are still solutions of the ODE on the extended interval where they are defined. This set is naturally partially ordered by inclusion of the intervals of definition. The key remark is that if $x_1$ and $x_2$ are two extensions on $I_1$ and $I_2$ respectively, then they coincide on $I_1\cap I_2$. This can be shown by a standard connectedness argument (a nonempty subset that is both open and closed in the interval must be the whole interval), with the help of the local uniqueness we have just shown, applied in the same way to a different initial condition. If $(x_\alpha,I_\alpha)$ denotes the set of all extensions, then $\bigcup_\alpha I_\alpha$ is an interval (remember that all the $I_\alpha$ contain the interval $I$ above, which suffices), and we can define $x(t):=x_\alpha(t)$ on $I_\alpha$ without ambiguity, thanks to the key remark. This is clearly a maximal extension, and since it extends every extension, there is no other maximal extension.
Conclusion: there exists a unique maximal solution.
Generalization: The whole thing works the same for $x'=F(t,x)$ when $F$ is locally Lipschitz in the second variable. This covers way more situations. That's why I gave the argument in a way which can be applied verbatim to this more general case.
Domain of the maximal extension: The big difference is here. In the locally Lipschitz case, the interval where the maximal solution is defined is not necessarily the whole interval where $F$ is defined. In a Lipschitz case like yours, and in particular in the linear case, the domain of the maximal extension is the whole interval $J$ whenever $F$ is continuous on $J\times \mathbb{R}$ and Lipschitz in the second variable. Indeed, in this case we have a uniform control on the extensions, so whenever a solution is defined on an interval strictly contained in $J$, it can be extended. So the maximal solution must be defined on all of $J$. In our case, it is $\mathbb{R}$.
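To see why local Lipschitz continuity alone does not give a global solution, the standard example is $x'=x^2$, whose explicit maximal solution we can write down and sanity-check numerically (a small sketch; the finite-difference check is only illustrative):

```python
# x' = x^2 is locally but not globally Lipschitz in x. Separation of
# variables for x(0) = 1 gives x(t) = 1/(1 - t): the maximal solution
# lives on (-inf, 1) although F(t, x) = x^2 is smooth on all of R^2.

def x(t):
    assert t < 1.0, "outside the maximal interval of existence"
    return 1.0 / (1.0 - t)

# Sanity check that x solves the ODE at an interior point:
h = 1e-6
t0 = 0.5
numeric_derivative = (x(t0 + h) - x(t0 - h)) / (2 * h)
print(numeric_derivative, x(t0) ** 2)  # both close to 4
```

The blow-up at $t=1$ happens because the Lipschitz constant of $x\mapsto x^2$ grows without bound as $|x|\to\infty$, so the local existence interval $[-1/2C,1/2C]$ shrinks as the solution grows.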
We have $3y^2-6y=3y(y-2)\ne0$, so $y\ne 0$ and $y\ne 2$. Since $y$ is continuous and $y(0)=1$, we conclude $0<y<2$.
Putting $y=0$ in $y^3-3y^2 -x-x^3=-2$ gives $x+x^3=2$, hence $x=1$. Similarly, $y=2$ gives $x+x^3=-2$, hence $x=-1$.
Therefore $-1<x<1$ is the interval we want. However, it is not easy to solve explicitly for $y$. (See Wolfram Alpha.)
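Although there is no convenient closed form, the branch with $0<y<2$ can be computed numerically. A minimal sketch (`y_of_x` is a hypothetical helper of my own; it uses bisection, which applies because $f(x,0)=2-x-x^3>0$ and $f(x,2)=-2-x-x^3<0$ for $-1<x<1$, and $f$ is strictly decreasing in $y$ on $(0,2)$):

```python
def f(x, y):
    # The implicit relation y^3 - 3y^2 - x - x^3 = -2, written as f = 0.
    return y**3 - 3*y**2 - x - x**3 + 2

def y_of_x(x, tol=1e-12):
    """Solve f(x, y) = 0 for the branch with 0 < y < 2 by bisection.
    f(x, 0) > 0 and f(x, 2) < 0 on -1 < x < 1, and f is decreasing
    in y there, so the root is unique in (0, 2)."""
    lo, hi = 0.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(x, mid) > 0:
            lo = mid       # root lies above mid
        else:
            hi = mid       # root lies at or below mid
    return (lo + hi) / 2

print(y_of_x(0.0))  # the branch passes through (0, 1), so ≈ 1
```

As $x\to 1^-$ the computed branch tends to $0$, and as $x\to -1^+$ it tends to $2$, matching the endpoints found above.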
The following is an exploration of what is close to the maximal achievable interval. The interval approximation found here is not rigorously proven. In the task, you are probably expected to find a much smaller interval, a box $[t_0-a,t_0+a]\times[y_0-b,y_0+b]$ in the $(t,y)$-plane on which the direct Picard iteration is contracting in the unmodified supremum norm.
To compute a numerical approximation for the inner interval, consider the vector field $(1, e^{-5t}+y^3)$ and rescale it to grow at most linearly in $y$ by dividing by $(1+y^2)$. Then solve the rescaled system, for instance over the parameter interval $[-10,10]$,
resulting in the interval bounds $[ -1.024183468244184,\, 1.548175031612699]$. Enlarging the integration interval pushes the negative bound further out, while in the positive direction the curve grows in $y$ but stays almost stationary in $t$; that is, there is a pole.
One could now speculate that on the small positive segment the right-hand side is bounded below by $y^3$. Comparing with $y'=y^3$, whose solution satisfies $y(t)^{-2}=y_0^{-2}-2(t-t_0)$, gives a pole for this lower bound and thus an upper bound for the right endpoint of the domain of the solution at $t_0+\frac1{2\cdot 0.4^2}=3.125$. This is not a very good bound, but it shows that the domain is bounded on that side at all. Taking the last computed point $(t_2,y_2)$ as initial value, one gets a better estimate for the pole via the same approximation: $t^*=t_2+\frac1{2y_2^2}= 1.54962150$.
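The computation sketched above can be reproduced with a short script. This is only a sketch under assumptions: I take the initial condition to be $y(0)=0.4$ (the value $0.4$ in the pole estimate suggests this), and I use a fixed-step RK4 integrator of my own in place of whatever solver was actually used, so the digits will not match the quoted bounds exactly:

```python
import math

def field(u):
    # Vector field (1, e^{-5t} + y^3), rescaled by 1/(1 + y^2) so that it
    # grows at most linearly in y (assumed ODE: y' = e^{-5t} + y^3).
    t, y = u
    r = 1.0 + y * y
    return [1.0 / r, (math.exp(-5.0 * t) + y ** 3) / r]

def rk4(f, u0, s0, s1, n):
    """Fixed-step classical RK4 for u' = f(u) from s0 to s1 (s1 < s0 allowed)."""
    h = (s1 - s0) / n
    u = list(u0)
    for _ in range(n):
        k1 = f(u)
        k2 = f([u[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = f([u[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = f([u[i] + h * k3[i] for i in range(2)])
        u = [u[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(2)]
    return u

u0 = [0.0, 0.4]                               # assumed initial point (t0, y0)
t_right, y_right = rk4(field, u0, 0.0, 10.0, 20000)
t_left, y_left = rk4(field, u0, 0.0, -10.0, 20000)
print(t_left, t_right)    # left end near -1.0; right end near the pole

# Refined pole estimate from the last computed point, as described above:
t_star = t_right + 1.0 / (2.0 * y_right ** 2)
print(t_star)             # improved estimate for the blow-up time
```

The rescaling by $1/(1+y^2)$ does not change the solution curve, only its parametrization, which is why the $t$-coordinate of the endpoints still bounds the domain of the original solution.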
On the left side, the transient $y=-e^{-5t/3}$ is an attracting asymptote, so the solution eventually follows it and exists on the whole left half-axis.