In my numerical analysis class we are studying numerical solutions of ODEs. For the equation $y'=-y$ we want to test the explicit midpoint rule $y_{n+1}=y_{n-1}+2hf(y_n)$, where $f$ is the RHS of $y'=-y$. The initial condition is $y_0=1$, the step size is $h=0.2$, and we apply the scheme on the interval $[0,20]$, with the index interpreted as $y_n \approx y(t = nh)$. We are asked to take $y_1 = e^{-h}$. I coded the scheme in MATLAB, but past a certain point the graph oscillates and does not look like the true solution at all, although on a smaller interval it does match $e^{-x}$. I have attached the figures below. Can someone please explain why this is happening? I thank all helpers.
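For reference, here is a minimal Python sketch of the experiment described above (my original code is in MATLAB; the variable names and the omission of plotting here are my own):

```python
import math

# Two-step scheme y_{n+1} = y_{n-1} + 2*h*f(y_n) applied to y' = -y
# on [0, 20], with y_0 = 1 and the prescribed start value y_1 = e^{-h}.
h = 0.2
N = int(round(20 / h))          # 100 steps
f = lambda y: -y

y = [0.0] * (N + 1)
y[0] = 1.0
y[1] = math.exp(-h)
for n in range(1, N):
    y[n + 1] = y[n - 1] + 2 * h * f(y[n])

# Early on the iterates track e^{-x} closely ...
print(y[5], math.exp(-1.0))     # values at x = 1
# ... but near x = 20 they have blown up into a large sign-alternating
# oscillation, which is the behavior shown in the figures.
print(y[N - 1], y[N])
```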
Specific numerical scheme does not converge
numerical-methods, numerical-calculus, ordinary-differential-equations
Related Solutions
As lhf mentioned, we need to write this as a system of first-order equations; then we can use Euler's Modified Method (EMM) on the system.
We can follow this procedure to write the second-order equation as a first-order system. Let $w_1(x) = y(x)$, $w_2(x) = y'(x)$, giving us:
- $w'_1 = y' = w_2$
- $w'_2 = y'' = \left(\dfrac{2}{x}\right)w_2 -\left(\dfrac{2}{x^2}\right)w_1 - \dfrac{1}{x^2}$
Our initial conditions become:
$$w_1(1) = 0, w_2(1) = 1$$
Now you can apply EMM; you can see how to step through it (with the plain Euler method, but it illustrates the approach) at The Euler Method for Higher Order Differential Equations.
From the given conditions we perform only one step of the algorithm, since we start at $x=1$ and want the result at $x=1.2$ with $h=0.2$.
Also note, we can compare the numerical solution to the exact result, which is:
$$y(x) = \dfrac{1}{2}\left(x^2-1\right)$$
That should be enough to guide you.
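To make that single step concrete, here is a hedged Python sketch, taking EMM to be the Heun (Euler predictor, trapezoidal corrector) form; the function and variable names are my own:

```python
# One step of Euler's Modified Method (read here as the predictor-corrector
# / Heun form) for the first-order system above, from x = 1 to x = 1.2.

def f(x, w):
    w1, w2 = w
    return (w2, (2 / x) * w2 - (2 / x**2) * w1 - 1 / x**2)

x, h = 1.0, 0.2
w = (0.0, 1.0)                                       # w1(1) = 0, w2(1) = 1

k1 = f(x, w)
pred = tuple(wi + h * ki for wi, ki in zip(w, k1))   # Euler predictor
k2 = f(x + h, pred)
w = tuple(wi + h / 2 * (a + b)                       # trapezoidal corrector
          for wi, a, b in zip(w, k1, k2))

print(w[0])                  # approximation to y(1.2)
print((1.2**2 - 1) / 2)      # exact value (1/2)(x^2 - 1) at x = 1.2, i.e. 0.22
```

For this particular equation the corrected $w_1$ happens to land on the exact value $0.22$.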
There is a theorem which states the following.
Let $u:I\subset\mathbb{R}\to\mathbb{R}$ be given by \begin{align*}u^{(n)} &= F(t,u,u',…,u^{(n-1)})\\ u^{(k)}(t_0)&=u_k^0, \qquad k=0,…,n-1. \end{align*} This is called an explicit, scalar initial value problem (IVP) of order $n$. Then $u$ is also given as the solution of the first-order system \begin{align*} u'_{n-1}(t)&=F(t,u_0(t),u_1(t),…,u_{n-1}(t)), & u_{n-1}(t_0)&=u_{n-1}^0 \\ u'_{n-2}(t)&=u_{n-1}(t), & u_{n-2}(t_0)&=u_{n-2}^0 \\ \vdots& \\ u'_{0}(t)&=u_{1}(t),& u_{0}(t_0)&=u_{0}^0 \end{align*} using the notation $u_0(t)=u(t)$ and $u_k=\frac{d^k}{dt^k}u$.
What this theorem states is that you can rewrite an IVP of higher order as a system of first-order ODEs. Written as a matrix system, if $F$ is linear, you will get
$$u' = Au + b.$$
Note, that your ODE is in fact a linear ODE.
This theorem is important because all the theory developed for first-order IVPs, $u'=f(t,u)$, can be applied to such systems. In particular, you can use all the numerical solvers, such as the Explicit Euler or Runge-Kutta methods.
In your case (using your first result $(*)$) that equation is: $$\begin{bmatrix} u_1' \\ u_2'\\u_3' \end{bmatrix} = \begin{bmatrix}0 & 1 & 0\\ 0& 0 & 1 \\ -1/x & -x &-1 \end{bmatrix} \begin{bmatrix}u_1 \\ u_2 \\ u_3 \end{bmatrix} +\begin{bmatrix}0\\0\\x\end{bmatrix}$$
Note that you also have to include the equations like $u_1' = u_2$ in that system. Also note that no $y$'s may appear in it anymore.
That system can be solved by any ODE solver, e.g. the Explicit Euler method: $$u^{(t+1)} = u^{(t)} + h(Au^{(t)}+b).$$
Is it clear how to set up your iteration from here?
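As a sketch of the mechanics, here is one Explicit Euler step in Python for the $3\times 3$ system above; the starting point `x0` and initial vector `u0` are placeholder sample values, not data from the original problem:

```python
# One Explicit Euler step u^{(t+1)} = u^{(t)} + h*(A(x) u^{(t)} + b(x))
# for the 3x3 system above.  x0 and u0 are placeholders chosen only to
# demonstrate the iteration, not values from the problem statement.

def A(x):
    return [[0, 1, 0],
            [0, 0, 1],
            [-1 / x, -x, -1]]

def b(x):
    return [0, 0, x]

def euler_step(x, u, h):
    Ax, bx = A(x), b(x)
    du = [sum(Ax[i][j] * u[j] for j in range(3)) + bx[i] for i in range(3)]
    return [u[i] + h * du[i] for i in range(3)]

x0, h = 1.0, 0.1
u0 = [0.0, 1.0, 0.0]          # placeholder initial data
u_next = euler_step(x0, u0, h)
print(u_next)
```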
Now for your other case $(**)$. You will get the following system, modulo my calculation errors: $$\begin{bmatrix}1 & 0 & 0\\ 0& 1 & 0 \\ x^2 & x & x \end{bmatrix}\begin{bmatrix} u_1' \\ u_2'\\u_3' \end{bmatrix} = \begin{bmatrix}0 & 1 & 0\\ 0& 0 & 1 \\ -1 & 0 &0 \end{bmatrix} \begin{bmatrix}u_1 \\ u_2 \\ u_3 \end{bmatrix} +\begin{bmatrix}0\\0\\1\end{bmatrix}$$
So this is something like $$Mu'=\tilde{A}u+\tilde{b}.$$ If you multiply by $M^{-1}$, you get the same system as above.
For numerical methods the second form has the disadvantage that most ODE solvers cannot be applied to it directly, so I see no reason to write it in this form.
But we can still try to construct an Explicit Euler method for this kind of matrix system:
Replacing $$u_i' \approx \frac{u_i(t+h) - u_i(t)}{h} = \frac{u_i^{(t+1)} - u_i^{(t)}}{h} $$ yields \begin{align*} u_1^{(t+1)} &= u_1^{(t)} + hF_1 \\ u_2^{(t+1)} &= u_2^{(t)} + hF_2 \\ x^2u_1^{(t+1)}+xu_2^{(t+1)}+xu_3^{(t+1)} &= x^2u_1^{(t)}+xu_2^{(t)}+xu_3^{(t)} + hF_3 \end{align*} And that translates to: $$Mu^{(t+1)}=Mu^{(t)}+h(\tilde{A}u^{(t)}+\tilde{b}).$$
Again, if you multiply by $M^{-1}$ you get the Explicit Euler method for the system $u'=Au+b$. The reason this works is that we assumed $F$ to be a linear function, and the difference quotient is itself a linear operator.
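This equivalence is easy to check numerically. Below is a small Python sketch using the matrices $M$, $\tilde{A}$, $\tilde{b}$ from the system above; the values of `x`, `u` and `h` are arbitrary samples I chose, and the solves exploit that $M$ is lower triangular:

```python
# Check: stepping the matrix form  M u_new = M u + h*(At u + bt)
# gives the same iterate as Explicit Euler applied to u' = M^{-1}(At u + bt).
# x, u, h are arbitrary sample values.

x, h = 2.0, 0.1
u = [1.0, -1.0, 0.5]

M  = [[1, 0, 0], [0, 1, 0], [x**2, x, x]]
At = [[0, 1, 0], [0, 0, 1], [-1, 0, 0]]
bt = [0, 0, 1]

rhs = [sum(At[i][j] * u[j] for j in range(3)) + bt[i] for i in range(3)]

# Explicit form: v = M^{-1} rhs by forward substitution, then u + h*v.
v1 = rhs[0]
v2 = rhs[1]
v3 = (rhs[2] - x**2 * v1 - x * v2) / x
u_explicit = [u[0] + h * v1, u[1] + h * v2, u[2] + h * v3]

# Matrix form: solve M u_new = M u + h*rhs, again by forward substitution.
w = [sum(M[i][j] * u[j] for j in range(3)) + h * rhs[i] for i in range(3)]
u1 = w[0]
u2 = w[1]
u3 = (w[2] - x**2 * u1 - x * u2) / x
u_matrix = [u1, u2, u3]

print(u_explicit, u_matrix)   # identical up to rounding
```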
Best Answer
On the name of the method
The method is not what is generally known as the explicit midpoint method. For $y'=f(y)$ that would be the one-step method $$ y_{n+1}=y_n+hf\Bigl(y_n+\tfrac12hf(y_n)\Bigr). $$ The method described in the question is a multi-step method; while it too is a generalization of the midpoint quadrature rule, it is better known as the central Euler (leapfrog) or two-step Nyström method.
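For contrast, a short Python sketch of the genuine one-step explicit midpoint method on the same problem; it has no parasitic root, so no oscillation appears:

```python
import math

# One-step explicit midpoint method y_{n+1} = y_n + h f(y_n + h/2 f(y_n)).
# For y' = -y it reduces to y_{n+1} = (1 - h + h^2/2) y_n; with h = 0.2
# the factor is 0.82, with modulus < 1, so the iterates decay monotonically.
h, N = 0.2, 100
f = lambda y: -y

y = 1.0
for _ in range(N):
    y = y + h * f(y + 0.5 * h * f(y))
print(y, math.exp(-20.0))    # both tiny and positive at x = 20
```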
General shape of the leading error terms
Using the definitions and notations of Deriving the central Euler method and intuition, the leading-order error terms can be written as $$ y_k-y(x_k)=h^2\bigl(a(x_k)+(-1)^kb(x_k)\bigr) $$ where the coefficient functions are solutions of the differential equations \begin{align} a'(x)&=f'(y(x))a(x)-\frac16y'''(x), & 0&=a(x_0)+b(x_0),\\ b'(x)&=-f'(y(x))b(x), & \frac{y_1-y(x_1)}{h^2}&=a(x_1)-b(x_1). \end{align}
Applied to the given example
For the test case $$ y'=f(y)=-y,~~ y(0)=1 \implies y(x)=e^{-x}, ~~y'''=-e^{-x}=-y $$ this results in \begin{align} a'(x)&=-a(x)+\frac16e^{-x}&\implies a(x)&=a(0)e^{-x}+\frac16xe^{-x}\\ b'(x)&=b(x)&\implies b(x)&=b(0)e^{x}. \end{align} As per the task $y_1=y(x_1)=e^{-h}$, so the coefficients $a(0),b(0)$ are obtained by solving the linear system \begin{align} 0&=a(0)+b(0)\\ 0&=a(0)e^{-h}+\frac16he^{-h}-b(0)e^{h}\\[1em] \hline a(0)=-b(0)&=-\frac{he^{-h}}{12\cosh(h)} \end{align} This means that the oscillating part of the error grows like $\frac{h^3e^{-h}}{12\cosh(h)}e^x$, which for $h=0.2$ at $x=20$ has the numerical value $259603.7064$, in the magnitude range of your plot. More precisely, $$ y_k=e^{-x_k}+\frac{h^2}6x_ke^{-x_k}+\frac{h^3}{12\cosh(h)}\bigl((-1)^{k}e^{+x_k}-e^{-x_k}\bigr)+O(h^4). $$
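This prediction can be sanity-checked in Python by running the two-step recursion and comparing the iterate at $x=20$ with the predicted amplitude (the names below are my own; agreement is only up to the neglected $O(h^4)$ terms, which by $x=20$ have had 100 steps to accumulate, so only the order of magnitude is expected to match):

```python
import math

# Run the two-step recursion y_{n+1} = y_{n-1} - 2*h*y_n with y_1 = e^{-h}
# and compare |y_100| with the predicted parasitic amplitude at x = 20.
h, N = 0.2, 100
y = [1.0, math.exp(-h)]
for n in range(1, N):
    y.append(y[n - 1] - 2 * h * y[n])

predicted = h**3 * math.exp(-h) / (12 * math.cosh(h)) * math.exp(20.0)
print(predicted)        # ~259603.7
print(y[N])             # same order of magnitude
```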
Plotting the error coefficients
The error evolution on the initial segment $x\in [0,4]$ looks like the first plot in the figure below. The colored graphs are the errors $y_k-y(x_k)$ divided by $h^2$ for different step sizes $h$, while the thin lines are the curves $a(x)\pm b(x)$ that the errors are expected to lie on or close to. This they apparently do, with only small deviations resulting from higher-order error terms.
A completely different, direct but specialized approach
You could also directly solve the linear recursion $$ y_{n+1}=y_{n-1}-2hy_n $$ via its characteristic polynomial $$ 0=q^2+2hq-1=(q+h)^2-(1+h^2)\implies q=-h\pm\sqrt{1+h^2}, $$ so that $$ y_n=(1+c)(\sqrt{1+h^2}-h)^n-c(-1)^n(\sqrt{1+h^2}+h)^n. $$ The condition $y_1=e^{-h}$ then gives $$ e^{-h}=\sqrt{1+h^2}-h+2c\sqrt{1+h^2} \\~\\\implies c=\frac{1-h+\frac12h^2-\frac16h^3+\frac1{24}h^4+O(h^5)+h-1-\frac12h^2+\frac18h^4+O(h^6)}{2\sqrt{1+h^2}} \\=-\frac{h^3(1-h)}{12}+O(h^5) $$ The first term tends to zero; for the second term, $h=0.2$ and $n=100$ give the value $$ \frac{h^3(1-h)}{12}(\sqrt{1+h^2}+h)^{n}\approx 2.2\cdot10^5 $$
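The closed form can be verified against the iteration directly in Python, with $c$ computed exactly from $y_1=e^{-h}$ rather than from the series expansion (names are my own):

```python
import math

# Closed form y_n = (1+c)(s-h)^n - c(-1)^n (s+h)^n with s = sqrt(1+h^2),
# where c solves e^{-h} = s - h + 2*c*s, checked against the recursion.
h, N = 0.2, 100
s = math.sqrt(1 + h * h)
c = (math.exp(-h) - (s - h)) / (2 * s)

y = [1.0, math.exp(-h)]
for n in range(1, N):
    y.append(y[n - 1] - 2 * h * y[n])

closed = lambda n: (1 + c) * (s - h)**n - c * (-1)**n * (s + h)**n
print(closed(N), y[N])             # agree to rounding error
print(abs(c) * (s + h)**N)         # parasitic amplitude, roughly 2.2e5
```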