We are given the system:
$\tag 1 x'' = -x; \quad x(0) = 1, ~ x'(0) = 0.$
To convert $(1)$ into a first-order system (it is already linear), we let $x_1 = x$ and $x_2 = x'$, yielding:
$\tag 2 x'_1 = x' = x_2 ~~~ \text{and} ~~ x'_2 = x'' = -x_1.$
This gives us the matrix: $A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \\ \end{bmatrix}.$
Solving for the eigenvalues and eigenvectors of $A$ yields:
$\lambda_1 = -i, v_1 = (i, 1)$ and $\lambda_2 = i, v_2 = (-i, 1).$
Using the eigenvalues and eigenvectors, together with the initial condition $x(0) = -1$, $x'(0) = 0$ (the sign-flipped version of $(1)$, which is what the iteration below uses), we arrive at:
$$\tag 3 x_1(t) = - \cos t ~~~~ \text{and} ~~~~ x_2(t) = \sin t.$$
You should verify that this satisfies the system in $(2)$, and derive it yourself!
Note: sign variants of the solution are fine as long as they satisfy the first-order system (with the initial condition of $(1)$ we would get $x_1(t) = \cos t$ and $x_2(t) = -\sin t$), but the Picard iteration changes accordingly.
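As a quick sanity check (a Python sketch, not part of the derivation), we can verify numerically that $x_1 = -\cos t$, $x_2 = \sin t$ satisfies the system $(2)$, approximating the derivatives by central differences:

```python
import math

# Sanity check: x1 = -cos t, x2 = sin t should satisfy x1' = x2 and
# x2' = -x1, with (x1(0), x2(0)) = (-1, 0).  Central differences
# approximate the derivatives at a few sample points.

def x1(t):
    return -math.cos(t)

def x2(t):
    return math.sin(t)

assert x1(0.0) == -1.0 and x2(0.0) == 0.0   # initial vector (-1, 0)

h = 1e-6
for t in (0.0, 0.5, 1.3, 2.7):
    dx1 = (x1(t + h) - x1(t - h)) / (2 * h)  # ~ x1'(t)
    dx2 = (x2(t + h) - x2(t - h)) / (2 * h)  # ~ x2'(t)
    assert abs(dx1 - x2(t)) < 1e-8           # x1' = x2
    assert abs(dx2 + x1(t)) < 1e-8           # x2' = -x1
print("system (2) satisfied at sample points")
```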
We will want to compare with the Taylor series of the solution above, namely:
$$\tag 4 \sin t = t - \frac{t^3}{6} + \frac{t^5}{120} + O(t^7) \\
-\cos t = -1 + \frac{t^2}{2} - \frac{t^4}{24} + \frac{t^6}{720} + O(t^8)$$
Now, we want to form the Picard iterates for the first-order system $(2)$.
The Picard–Lindelöf iteration is given by:
$$\tag 5 \displaystyle x_0(t) = x_0, ~~x_{n+1}(t) = x_0 + \int^t_{t_0} f(s, x_n(s))\,ds.$$
We apply this to both components $x_1(t)$ and $x_2(t)$, since the reduction we did earlier produced a two-dimensional system.
The setup uses the initial vector $x(0) = (x_1(0),x_2(0))^T = (-1, 0)^T$ and then computes the Picard iterates via $(5)$, so we have:
$X_{0}= \begin{bmatrix} -1 \\0 \\ \end{bmatrix}$ and $A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \\ \end{bmatrix}$.
First Iteration:
$\displaystyle \begin{bmatrix} -1 \\0 \\ \end{bmatrix} + \int^{t}_{0} \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} -1 \\0 \\ \end{bmatrix} ds = \begin{bmatrix} -1 \\ t \\ \end{bmatrix}$
Second Iteration:
$\displaystyle \begin{bmatrix} -1 \\0 \\ \end{bmatrix} + \int^{t}_{0} \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} -1 \\ s \\ \end{bmatrix} ds = \begin{bmatrix} -1 + \frac{t^2}{2} \\ t \end{bmatrix}$
Third Iteration:
$\displaystyle \begin{bmatrix} -1 \\0 \\ \end{bmatrix} + \int^{t}_{0} \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} -1 +\frac{s^2}{2} \\ s \end{bmatrix} ds = \begin{bmatrix} -1 +\frac{t^2}{2} \\ t -\frac{t^3}{6} \end{bmatrix}$
Fourth Iteration:
$\displaystyle \begin{bmatrix} -1 \\0 \\ \end{bmatrix} + \int^{t}_{0} \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} -1 + \frac{s^2}{2} \\ s -\frac{s^3}{6} \end{bmatrix} ds = \begin{bmatrix} -1 +\frac{t^2}{2} - \frac{t^4}{24} \\ t - \frac{t^3}{6} \end{bmatrix}$
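The four iterations above can also be generated exactly in rational arithmetic. Here is a short Python sketch (the helper names `integrate` and `picard_step` are mine, not from the text) that carries out $(5)$ for this linear system, representing each component as a list of polynomial coefficients:

```python
from fractions import Fraction

# One Picard iterate maps the coefficient lists (x1p, x2p) -- where p[k]
# is the coefficient of t^k -- to X0 + integral_0^t A x(s) ds, with
# X0 = (-1, 0) and A = [[0, 1], [-1, 0]].

def integrate(p):
    """Antiderivative of the polynomial p, with zero constant term."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def picard_step(x1p, x2p):
    top = integrate(x2p)                  # (A x)_1 = x2
    bot = integrate([-c for c in x1p])    # (A x)_2 = -x1
    top[0] += Fraction(-1)                # add the initial vector X0
    return top, bot

x1p, x2p = [Fraction(-1)], [Fraction(0)]  # x_0(t) = X0
for n in range(4):
    x1p, x2p = picard_step(x1p, x2p)

print([str(c) for c in x1p])  # ['-1', '0', '1/2', '0', '-1/24']
print([str(c) for c in x2p])  # ['0', '1', '0', '-1/6', '0']
```

The final coefficient lists are exactly the fourth-iteration vector above: the partial sums of $-\cos t$ and $\sin t$.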
Notice what this is converging to? Compare these iterates to the series expansions we wrote above in $(4)$: the top component converges to $-\cos t$ and the bottom to $\sin t$, as expected.
Although we can see that this converges to $x_1(t) = -\cos t$ and $x_2(t) = \sin t$, you should really prove it with an inductive argument: write the $n$-th iterate as an explicit vector of partial sums, show by induction that the formula holds for every $n$, and conclude that the iterates converge to $x_1(t)$ and $x_2(t)$.
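For reference, the closed form suggested by the even-numbered iterates (my guess at the inductive hypothesis, which the induction would then verify) is:
$$x^{(2n)}(t) = \begin{bmatrix} -\displaystyle\sum_{k=0}^{n} \frac{(-1)^k t^{2k}}{(2k)!} \\ \displaystyle\sum_{k=1}^{n} \frac{(-1)^{k-1} t^{2k-1}}{(2k-1)!} \end{bmatrix},$$
whose components are the partial sums of $-\cos t$ and $\sin t$; for $n = 2$ this reproduces the fourth iteration above.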
Best Answer
I assume you are trying to solve $$ x'=F(x)\qquad x(0)=x_0. $$
Existence: Let $C$ be a Lipschitz constant for $F$. The idea is to define recursively a sequence of continuous functions on $I:=\left[-\frac{1}{2C},\frac{1}{2C}\right]$ by $$ x_0(t):=x_0\qquad x_{n+1}(t):=x_0+\int_0^tF(x_n(s))\,ds. $$ Clearly, this defines continuous (in fact $C^1$) functions on $\mathbb{R}$. But when we restrict to $I$, we have $$ |x_{n+1}(t)-x_n(t)|\leq\left|\int_0^t|F(x_n(s))-F(x_{n-1}(s))|\,ds\right|\leq C|t|\sup_I|x_{n}-x_{n-1}|. $$ Therefore $$ \|x_{n+1}-x_n\|_\infty\leq \frac{1}{2}\|x_n-x_{n-1}\|_\infty $$ with the sup norm taken over $I$.
It follows that $(x_n)$ is Cauchy in the Banach space $C^0(I)$. So it converges to a continuous function $x$ on $I$ which, by uniqueness of limit in a metric space and continuity of the recurrence formula, satisfies $$ x(t)=x_0+\int_0^tF(x(s))ds \qquad\forall t\in I. $$ Clearly, $x(0)=x_0$ so the initial condition is fulfilled. Now by the fundamental theorem of calculus, the rhs is differentiable and we get $$ x'(t)=F(x(t))\qquad\forall t\in\left(-\frac{1}{2C},\frac{1}{2C}\right). $$ So you have a local solution.
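To make the contraction concrete for the example system, here is a numerical sketch (the grid size and helper names are my choices): with $F(x) = Ax$ and $A$ as above, $C = 1$ is a Lipschitz constant for $F$ in the sup norm, so on $[0, \tfrac{1}{2C}] = [0, \tfrac12]$ successive Picard iterates should shrink toward each other by at least a factor of $2$:

```python
# Picard iterates for x' = F(x), F(x) = (x2, -x1), x0 = (-1, 0), computed
# on a grid over [0, 1/2] with the trapezoid rule.  We check the estimate
# ||x_{n+1} - x_n||_inf <= (1/2) ||x_n - x_{n-1}||_inf from the text.

N = 2000
h = 0.5 / N

def picard_step(x):
    """One Picard iterate: t -> x0 + integral_0^t F(x(s)) ds."""
    f = [(b, -a) for (a, b) in x]     # F evaluated along the grid
    out = [(-1.0, 0.0)]
    i1 = i2 = 0.0
    for k in range(1, N + 1):
        i1 += h * (f[k - 1][0] + f[k][0]) / 2
        i2 += h * (f[k - 1][1] + f[k][1]) / 2
        out.append((-1.0 + i1, 0.0 + i2))
    return out

def sup_dist(x, y):
    return max(max(abs(a - c), abs(b - d)) for (a, b), (c, d) in zip(x, y))

iterates = [[(-1.0, 0.0)] * (N + 1)]  # x_0 is the constant function
for _ in range(6):
    iterates.append(picard_step(iterates[-1]))

gaps = [sup_dist(iterates[i + 1], iterates[i]) for i in range(6)]
ratios = [gaps[i + 1] / gaps[i] for i in range(5)]
print([round(r, 4) for r in ratios])  # every ratio is <= 0.5
```

In fact the observed ratios decay faster than $\tfrac12$, which is the usual behavior: the estimate $C|t|\leq\tfrac12$ is only an upper bound.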
Uniqueness: Any two solutions on this interval satisfy the fixed point condition $$ x(t)=x_0+\int_0^t F(x(s))\,ds. $$ So the difference of two such solutions satisfies $\|x_1-x_2\|_\infty\leq\frac{1}{2}\|x_1-x_2\|_\infty$. Hence $x_1=x_2$.
Maximal extension: Consider the set of all extensions of the unique solution we have just found, and which are still solutions of the ode on the extended interval where they are defined. This is naturally ordered, partially, by inclusion of the intervals of definition. The key remark is that if $x_1$ and $x_2$ are two extensions on $I_1$ and $I_2$ respectively, then they coincide on $I_1\cap I_2$. This can be shown by a standard connectedness argument (open, closed, and nonempty implies all), with the help of the local uniqueness we have just shown, applied similarly to a different initial condition. If $(x_\alpha,I_\alpha)$ denotes the set of all extensions, then $\bigcup_\alpha I_\alpha$ is an interval (remember, all the $I_\alpha$ contain $I$ above, and this suffices) and we can define $x(t):=x_\alpha(t)$ on $I_\alpha$ without ambiguity, thanks to the key remark above. This is clearly a maximal extension. And since it extends every extension, there is no other maximal extension.
Conclusion: there exists a unique maximal solution.
Generalization: The whole thing works the same for $x'=F(t,x)$ when $F$ is locally Lipschitz in the second variable. This covers way more situations. That's why I gave the argument in a way which can be applied verbatim to this more general case.
Domain of the maximal extension: The big difference is here. In the locally Lipschitz case, the interval where the maximal solution is defined is not necessarily the whole interval where $F$ is defined. In the Lipschitz case like yours, and in particular in the linear case, the domain of the maximal extension is the whole interval $J$ where $F$ lives: if $F$ is continuous on $J\times \mathbb{R}$ and Lipschitz in the second variable, we have a uniform control on the extensions, so whenever a solution is defined on an interval strictly contained in $J$, it can be extended. So the maximal solution must be defined on $J$. In our case, it is $\mathbb{R}$.