I was recently watching a tutorial on Euler's method for approximating differential equations, and the whole time I was thinking "why can't you just take the limit of the step size $h$ as it goes to $0$, and get an exact or near-exact approximation of the differential equation?" So that is my question: is it possible to do that?
Calculus – Can Euler’s Method for Differential Equations Take the Limit as $h \to 0$ for Exact Approximation
approximation, calculus, limits, numerical-methods, ordinary-differential-equations
Related Solutions
As lhf mentioned, we need to write this as a system of first order equations and then we can use Euler's Modified Method (EMM) on the system.
We can follow this procedure to write the second order equation as a first order system. Let $w_1(x) = y(x)$, $w_2(x) = y'(x)$, giving us:
- $w'_1 = y' = w_2$
- $w'_2 = y'' = \left(\dfrac{2}{x}\right)w_2 -\left(\dfrac{2}{x^2}\right)w_1 - \dfrac{1}{x^2}$
Our initial conditions become:
$$w_1(1) = 0, w_2(1) = 1$$
Now you can apply EMM. You can see how to step through such a system (using only the plain Euler method, but the approach carries over) at The Euler Method for Higher Order Differential Equations
From the given conditions, only one step of the algorithm is needed: we start at $x=1$ and want the result at $x=1.2$, so $h=0.2$.
Also note, we can compare the numerical solution to the exact result, which is:
$$y(x) = \dfrac{1}{2}\left(x^2-1\right)$$
That should be enough to guide you.
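As a concrete check, here is a small Python sketch (my own, not part of the original answer) that carries out the single EMM step described above. I am taking "EMM" to mean the usual Euler-predictor/trapezoidal-corrector scheme (Heun's method); that interpretation is an assumption on my part.

```python
# One EMM (Heun) step on the first-order system from the answer:
#   w1' = w2,  w2' = (2/x) w2 - (2/x^2) w1 - 1/x^2,
# with w1(1) = 0, w2(1) = 1 and stepsize h = 0.2.

def f(x, w):
    """Right-hand side of the first-order system."""
    w1, w2 = w
    return (w2, (2.0 / x) * w2 - (2.0 / x**2) * w1 - 1.0 / x**2)

def emm_step(x, w, h):
    """One step: Euler predictor, then average the two slopes (corrector)."""
    k1 = f(x, w)
    pred = tuple(wi + h * ki for wi, ki in zip(w, k1))
    k2 = f(x + h, pred)
    return tuple(wi + 0.5 * h * (a + b) for wi, a, b in zip(w, k1, k2))

x0, w0, h = 1.0, (0.0, 1.0), 0.2
w_next = emm_step(x0, w0, h)
exact = 0.5 * ((x0 + h) ** 2 - 1.0)   # y(x) = (x^2 - 1)/2, so y(1.2) = 0.22
print(w_next[0], exact)
```

Because the exact solution is a quadratic and EMM is second order, this single step already reproduces $y(1.2)=0.22$ up to rounding error.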
Too much going on. Keep it simple so you don't get lost. Just look at the Taylor expansion about the midpoint. The two points $t$ and $t+\Delta t$ sit on either side of the midpoint, each at a distance of $\frac{\Delta t}{2}$, so you get
$$ u(t+\Delta t) = u\left(t+\frac{\Delta t}{2}\right) + \left(\frac{\Delta t}{2}\right)u'\left(t+ \frac{\Delta t}{2}\right) + \frac{1}{2}\left(\frac{\Delta t}{2}\right)^2 u''\left(t + \frac{\Delta t}{2}\right) + \mathcal{O}(\Delta t^3) $$
Here I use the shorthand $f(t)=f(u(t),t)$; plugging in $u' = f$ (and hence $u'' = f'$) you get
$$u(t+\Delta t) = u\left(t+\frac{\Delta t}{2}\right) + \left(\frac{\Delta t}{2}\right)f\left(t+ \frac{\Delta t}{2}\right) + \frac{1}{2}\left(\frac{\Delta t}{2}\right)^2 f'\left(t + \frac{\Delta t}{2}\right) + \mathcal{O}(\Delta t^3) $$
Call that equation 1. The same steps on the other side give
$$u(t) = u\left(t+\frac{\Delta t}{2}\right) - \left(\frac{\Delta t}{2}\right)f\left(t + \frac{\Delta t}{2}\right) + \frac{1}{2}\left(\frac{\Delta t}{2}\right)^2 f'\left(t + \frac{\Delta t}{2}\right) + \mathcal{O}(\Delta t^3) $$
Call that equation 2. This was the only hard part. Now subtract equation 2 from equation 1 to make equation 3:
$$u(t+\Delta t) - u(t) = (\Delta t)\, f\left(t + \frac{\Delta t}{2}\right) + \mathcal{O}(\Delta t^3) $$
What this tells us is that if you know the solution at the midpoint and use it to calculate the next step, then you have an error of $\mathcal{O}(\Delta t^3)$, which is what we want. But how do we find $u$ at the midpoint, i.e. $u(t+\frac{\Delta t}{2})$? Add equation 1 to equation 2 and divide by 2:
$$\frac{u(t+\Delta t) +u(t)}{2} = u\left(t+\frac{\Delta t}{2}\right) + \mathcal{O}(\Delta t^2)$$
Notice that the first-order terms cancelled. Plugging this into $f$ as our approximation for $u$ at the midpoint, we get
$$ f\left(t+\frac{\Delta t}{2},u\left(t+\frac{\Delta t}{2}\right)\right) = f\left(t+\frac{\Delta t}{2},\frac{u(t+\Delta t) +u(t)}{2}\right) + \mathcal{O}(\Delta t^2) $$
Notice that this is an order-2 approximation to $f$ at the midpoint, but since $f$ is multiplied by $\Delta t$, the error it contributes is order 3; substituting into equation 3, we get:
$$u(t+\Delta t) - u(t) = (\Delta t)\, f\left(t+\frac{\Delta t}{2},\frac{u(t+\Delta t) +u(t)}{2}\right) + \mathcal{O}(\Delta t^3) $$
which is the midpoint method with the error. I hope this illuminates "why" it works: the Taylor expansion about the center makes the order-1 term flip sign and cancel. This trick is used a lot in numerical methods.
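To see the claimed order in action, here is a minimal Python sketch (my own illustration, not from the answer) applying the midpoint update to $u' = -u$, $u(0)=1$, whose exact solution is $e^{-t}$. For this right-hand side the implicit update can be solved in closed form, and halving $\Delta t$ should cut the error at $t=1$ by about a factor of $4$, as expected for a method with $\mathcal{O}(\Delta t^3)$ local error:

```python
import math

# Implicit midpoint rule:  u_{n+1} = u_n + dt * f(t + dt/2, (u_n + u_{n+1})/2).
# For f(t, u) = -u this solves in closed form:
#     u_{n+1} = u_n * (1 - dt/2) / (1 + dt/2)

def implicit_midpoint(u0, t_end, dt):
    u = u0
    for _ in range(round(t_end / dt)):
        u *= (1.0 - 0.5 * dt) / (1.0 + 0.5 * dt)
    return u

exact = math.exp(-1.0)                               # u(1) = e^{-1}
err_h  = abs(implicit_midpoint(1.0, 1.0, 0.1)  - exact)
err_h2 = abs(implicit_midpoint(1.0, 1.0, 0.05) - exact)
print(err_h / err_h2)   # close to 4: the global error is O(dt^2)
```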
Best Answer
Not really. Recall that the Euler method for $y^\prime=f(x,y)$ takes the form
$$\begin{align*}x_{k+1}&=x_k+h\\y_{k+1}&=y_k+hf(x_k,y_k)\end{align*}$$
for some stepsize $h$. Taking $h=0$ here is equivalent to not moving at all!
However, your idea can be made slightly more practical. Consider the application of the Euler method with stepsize $h/2$:
$$\begin{align*}x_{k+1/2}&=x_k+h/2\\y_{k+1/2}&=y_k+hf(x_k,y_k)/2\\x_{k+1}&=x_{k+1/2}+h/2\\y_{k+1}&=y_{k+1/2}+hf(x_{k+1/2},y_{k+1/2})/2\end{align*}$$
The value of $y_{k+1}$ from these two steps can usually be expected to be a bit more accurate than the value of $y_{k+1}$ from the $h$-step Euler method. We can keep playing the game, taking 4 steps with stepsize $h/4$, 8 steps with stepsize $h/8$, and so on, yielding a sequence of estimates for $y_{k+1}$ corresponding to decreasing $h$.
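As a quick sanity check of that claim, here is a small Python sketch (my own) comparing one Euler step of size $h$ with two steps of size $h/2$ on the example equation $y'=x-y$, $y(0)=1$, whose exact solution is $y(x) = x - 1 + 2e^{-x}$:

```python
import math

def f(x, y):
    """Right-hand side of the test equation y' = x - y."""
    return x - y

x0, y0, h = 0.0, 1.0, 0.5

# One Euler step of size h ...
y_one = y0 + h * f(x0, y0)

# ... versus two Euler steps of size h/2 over the same interval.
y_mid = y0 + 0.5 * h * f(x0, y0)
y_two = y_mid + 0.5 * h * f(x0 + 0.5 * h, y_mid)

exact = (x0 + h) - 1.0 + 2.0 * math.exp(-(x0 + h))   # y(x) = x - 1 + 2e^{-x}
print(abs(y_one - exact), abs(y_two - exact))        # the second is smaller
```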
One way to estimate what happens as $h\to 0$ is to take all those estimates of $y_{k+1}$ along with the associated stepsizes, and then fit an interpolating polynomial to them. For example, taking $y_{k+1}^{(0)}$ to be the result for stepsize $h$, $y_{k+1}^{(1)}$ the corresponding result for stepsize $h/2$, and $y_{k+1}^{(2)}$ the result for stepsize $h/4$, one can fit a quadratic interpolating polynomial to the three points $\{(h,y_{k+1}^{(0)}),(h/2,y_{k+1}^{(1)}),(h/4,y_{k+1}^{(2)})\}$, and then estimate the limit as $h\to 0$ by evaluating the interpolating polynomial thus obtained at 0.
This scheme of using the interpolating polynomial to estimate the limit at 0 is called Richardson extrapolation. In practice, one certainly takes more than three points for the interpolation, and the degree of the interpolating polynomial needed is estimated based on the behavior at past points. The idea of applying Richardson extrapolation to ODE integration is due to Roland Bulirsch and Josef Stoer. The Bulirsch-Stoer method discussed here uses a slightly different method for integrating the differential equation (the modified midpoint method) from which the extrapolations are built (as well as a slightly modified extrapolation method), but is essentially the same idea as presented here.
Here's a tiny Mathematica demonstration of the Bulirsch-Stoer idea:
{-0.235759, -0.102946, -0.0485411, -0.0236106}
Here we tried the Euler method on the differential equation $y^\prime=x-y$ with initial condition $y(0)=1$ for the stepsizes $h=1/2,1/4,1/8,1/16$, and compared the result at $x=1$ with the true value $y(1)=2/e$. As you can see, the accuracy isn't too good for any of these.
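The Mathematica code itself is not reproduced above, but the experiment is easy to re-run; the following Python sketch (an assumed reimplementation, not the original code) generates the same four errors:

```python
import math

# Euler for y' = x - y, y(0) = 1, integrated to x = 1, compared with the
# true value y(1) = 2/e, for stepsizes h = 1/2, 1/4, 1/8, 1/16.

def euler_error(h):
    x, y = 0.0, 1.0
    for _ in range(round(1.0 / h)):
        y += h * (x - y)
        x += h
    return y - 2.0 / math.e

errors = [euler_error(2.0 ** -k) for k in range(1, 5)]
print(errors)  # approximately -0.235759, -0.102946, -0.0485411, -0.0236106
```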
Here's what happens after Richardson extrapolation using those same results from Euler:
0.0000823142
We end up with a result good to three or so digits, much better than even the result of Euler corresponding to $h=1/256$. Pretty good, I would say, considering that only $2+4+8+16=30$ evaluations of $f(x,y)=x-y$ were needed for a result with three-digit accuracy, while Euler with $256$ steps (and thus $256$ evaluations of $f(x,y)$) can only manage two good digits.
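The extrapolation step can likewise be sketched in Python (again my own reconstruction, not the original Mathematica). Neville's algorithm evaluates, at $h=0$, the cubic interpolating the four (stepsize, Euler estimate) pairs:

```python
import math

def euler(h):
    """Euler for y' = x - y, y(0) = 1, integrated to x = 1."""
    x, y = 0.0, 1.0
    for _ in range(round(1.0 / h)):
        y += h * (x - y)
        x += h
    return y

hs = [2.0 ** -k for k in range(1, 5)]   # stepsizes 1/2, 1/4, 1/8, 1/16
ys = [euler(h) for h in hs]

def neville_at_zero(xs, fs):
    """Value at 0 of the polynomial interpolating the points (xs[i], fs[i])."""
    p = list(fs)
    for j in range(1, len(xs)):
        for i in range(len(xs) - j):
            p[i] = ((0.0 - xs[i + j]) * p[i] - (0.0 - xs[i]) * p[i + 1]) \
                   / (xs[i] - xs[i + j])
    return p[0]

extrapolated = neville_at_zero(hs, ys)
print(extrapolated - 2.0 / math.e)   # error of the h -> 0 extrapolation
```

With only those four crude Euler runs, the extrapolated error comes out around $8.2\times 10^{-5}$, matching the figure quoted above.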
As an aside, Mathematica implements Bulirsch-Stoer internally in `NDSolve[]`, as the option `Method -> "Extrapolation"`.