First of all, yes, the loop invariant must be true before we start the loop, after each iteration, and after the loop completes.
See https://en.wikipedia.org/wiki/Loop_invariant, http://www.cs.miami.edu/home/burt/learning/Math120.1/Notes/LoopInvar.html, any of several other pages from university course notes that you'll find if you search for "loop invariant", or (presumably) your textbook.
In the most general sense, finding a loop invariant is about constructing
a mathematical proof, and there is not a simple set of instructions that will
give a good invariant whenever one exists. But if you have an inductive definition of the result of the algorithm, as you do here, your invariant might
also look like an inductive definition.
In this case you have that for all $i>1$, $F(i) = F(i-1) + F(i-2)$.
You also know the values of $F(0)$ and $F(1)$.
This suggests an algorithm in which before the first iteration of a loop,
you store the values $F(0)$ and $F(1)$ in two variables, $A$ and $B$, respectively, and you maintain the invariant that $A=F(k)$ and $B=F(k+1)$
where $k$ is the number of iterations of the loop that have completed.
You can prove that the invariant is maintained by showing that at the end of
each iteration, the value in $A$ is the value that was previously in $B$,
and the value in $B$ is the previous value of $A+B$.
That is, if at the end of $k$ iterations you had $F(k)$ stored in $A$ and
$F(k+1)$ stored in $B$, at the end of the next iteration
(that is, after $k+1$ iterations) you will have $F(k+1)$ stored in $A$
and $F(k+2) = F(k+1) + F(k)$ stored in $B$.
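Rendered in Python (my sketch of the scheme just described, not your exact code), the $A,B$ loop might look like this, with the invariant noted at each step:

```python
def fib(n):
    """Return F(n) for n >= 1 by iterating the (A, B) pair n-1 times."""
    A, B = 0, 1          # before the loop: A = F(0), B = F(1)
    for k in range(n - 1):
        A, B = B, A + B  # invariant restored: A = F(k+1), B = F(k+2)
    return B             # after n-1 iterations, B = F(n)
```

Note that the simultaneous assignment `A, B = B, A + B` reads both old values before either is overwritten, which is exactly the "previous value" phrasing of the invariant.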
My $A$ and $B$ look suspiciously like your `r` and `l`, respectively.
But note that in the algorithm I described, $B$ will store $F(n)$ after only
$n-1$ iterations, while your algorithm is written to do $n$ iterations,
and to return the larger number at the end of the $n$th iteration.
Your algorithm resolves this by doing an extra iteration at the beginning;
that is, it starts by setting `l` to $F(0)$ and `r` to $F(-1)$,
where $F(-1)=1$ so that $F(1) = F(0) + F(-1)$.
So after $k$ iterations, where $k\geq 0$,
your algorithm always has $F(k-1)$ stored in `r` and $F(k)$ stored in `l`.
Alternatively, rather than defining $F(-1) = 1$, you can write a special
case into your loop invariant for the case when you have completed $0$ iterations. That is what your proposed invariant does, which is fine.
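Under this reading of your algorithm (the names `l` and `r` are assumed from your question), the $F(-1)=1$ version that does exactly $n$ iterations might be sketched as:

```python
def fib(n):
    """Return F(n) for n >= 0, doing exactly n iterations."""
    l, r = 0, 1          # l = F(0), r = F(-1), with F(-1) defined as 1
    for i in range(n):
        l, r = l + r, l  # after k iterations: l = F(k), r = F(k-1)
    return l             # the larger of the two; l = F(n)
```

The payoff of the extra iteration is that $n=0$ needs no special case here: with zero iterations, `l` already holds $F(0)$.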
One of the potential difficulties of writing loop invariants is notational.
The invariant as I stated it depends on knowing how many iterations of the
loop have been completed. You have a variable `i` whose value during each
iteration is clearly defined, but whose values at the beginning and end of each
iteration are not clear to me. At the end of the first iteration, does `i` store the value $0$
or the value $1$? You should have a clearly defined convention for some
kind of variable associated with a loop control such as `for i in [0,n)`,
with clearly defined values of this variable before the first iteration
of the loop and at the end of each iteration of the loop.
Your notes or textbook should have established a convention for how you
construct such a variable for such a loop.
Without a clearly established convention, I'm not sure whether `l` at the
end of the last iteration is $f_n$ (which it is if $i$ in your invariant has
the value $k-1$ at the end of $k$ iterations)
or $f_{n+1}$ (which is what you get if $i$ denotes the number of iterations
that have been completed).
EDIT: "How I would have done it," not necessarily how you should do it:
If I were writing this for a formal methods course (or possibly even
in an ordinary program), I probably would have started by laying out the
base cases in code like this (using `=` for an equality test, not assignment):

    if n=0
        return 0
    else if n=1
        return 1
    else if n>1
        ... do loop ...
        return result
Note that the return value is undefined if $n<0$. For $n>1$,
the loop would do $n-1$ iterations of the $A,B$ algorithm I described above,
and the result would be the last value of $B$.
I might then notice that the outcomes would be the same if I wrote
    if n=0
        return 0
    else if n>0
        ... do loop ...
        return result
That is, one of the "if" conditions was superfluous. But I cannot get rid of
the special handling of the case $n=0$ with the loop as written,
because the loop depends on doing $n-1$ iterations to produce $F(0)$,
and you can't do $-1$ iterations of a loop.
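Putting this together, a Python sketch of the simplified version (my rendering of "how I would have done it", not necessarily how you should):

```python
def fib(n):
    """Return F(n); the result is undefined (here: an error) for n < 0."""
    if n < 0:
        raise ValueError("F(n) is undefined for n < 0")
    if n == 0:           # cannot be folded into the loop: the loop below
        return 0         # needs n-1 >= 0 iterations to produce its result
    A, B = 0, 1          # A = F(0), B = F(1)
    for _ in range(n - 1):
        A, B = B, A + B
    return B             # B = F(n)
```

Raising an error for $n<0$ is one choice; the pseudocode above simply leaves that case undefined.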
The interval is quite simply a consequence of following the standard proof of Picard–Lindelöf. As the Lipschitz constant is globally $L=1$, one does not need a restriction in the $y$ direction.
In the next step, the Picard iteration is considered on $C([-\epsilon,\epsilon])$, where, as a mapping on that function space, it has Lipschitz constant $L\epsilon=\epsilon$:
$$
\bigl|P[y_1](t)-P[y_2](t)\bigr|=\left|\int_0^t(y_1(s)-y_2(s))\,ds\right|
\le|t|\,\|y_1-y_2\|\le\epsilon\,\|y_1-y_2\|.
$$
Demanding $\epsilon<1$ makes it a contraction. Thus there is a solution of the ODE on the domain $[-\epsilon,\epsilon]$.
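As a numerical illustration (my sketch; I am assuming the ODE is $y'=y$ with $y(0)=1$, which is consistent with the integral estimate above, so the Picard iterates are the Taylor partial sums of $e^t$), successive iterates contract in the sup norm on $[-\epsilon,\epsilon]$ at least by the factor $\epsilon$:

```python
import math

eps = 0.5  # epsilon < 1, so the Picard map is a contraction

def y(n, t):
    """n-th Picard iterate for y' = y, y(0) = 1: the Taylor partial sum."""
    return sum(t**k / math.factorial(k) for k in range(n + 1))

def sup_dist(n, m):
    """Sup-norm distance of iterates n and m on a grid over [-eps, eps]."""
    ts = [i * eps / 100 for i in range(-100, 101)]
    return max(abs(y(n, t) - y(m, t)) for t in ts)

# Successive distances shrink at least by the contraction factor eps:
d1, d2, d3 = sup_dist(1, 2), sup_dist(2, 3), sup_dist(3, 4)
assert d2 <= eps * d1 and d3 <= eps * d2
```

Here $\|y_{n+1}-y_{n+2}\| = \|P[y_n]-P[y_{n+1}]\| \le \epsilon\,\|y_n-y_{n+1}\|$, which is exactly what the assertions check.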
In the sense of domain extensions, this sequence of solutions has as its limit a solution on $(-1,1)$.
Use the induction hypothesis for $y_n$ inside the integral in the definition of $y_{n+1}$, a change of index in the sum and finally a "convenient $0$":
\begin{align*} y_{n+1} &= 2+\int_0^x 2s(1+y_n(s))\,\mathrm ds, \\ &= 2+\int_0^x 2s\left(1-1+3\sum_{k=0}^n \dfrac{s^{2k}}{k!}\right)\,\mathrm ds , \\ &= 2+\int_0^x 6\sum_{k=0}^n \dfrac{s^{2k+1}}{k!}\,\mathrm ds , \\ &= 2+ 6\sum_{k=0}^n\int_0^x \dfrac{s^{2k+1}}{k!}\,\mathrm ds , \\ &= 2+ 6\sum_{k=0}^n \dfrac{x^{2k+2}}{k!(2k+2)}, \\ &= 2+ 3\sum_{k=0}^n \dfrac{x^{2(k+1)}}{(k+1)!}, \\ &= 2+ 3\sum_{k=1}^{n+1} \dfrac{x^{2k}}{k!}, \\ &= 2\color{red}{-3+3} +3\sum_{k=1}^{n+1} \dfrac{x^{2k}}{k!}= -1 + 3\sum_{k=0}^{n+1} \dfrac{x^{2k}}{k!}. \end{align*} Good job on your claim!
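The induction step can be sanity-checked numerically (my sketch: `y(n, x)` is the claimed closed form, and the integral defining $y_{n+1}$ is evaluated with a midpoint rule):

```python
import math

def y(n, x):
    """Claimed closed form: y_n(x) = -1 + 3 * sum_{k=0}^n x^(2k)/k!."""
    return -1 + 3 * sum(x**(2 * k) / math.factorial(k) for k in range(n + 1))

def picard_step(n, x, steps=10_000):
    """y_{n+1}(x) = 2 + integral_0^x 2s(1 + y_n(s)) ds, via the midpoint rule."""
    h = x / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * h
        total += 2 * s * (1 + y(n, s))
    return 2 + h * total
```

For instance, `picard_step(1, 0.7)` agrees with `y(2, 0.7)` up to quadrature error, which is the $n=1$ instance of the identity derived above.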