As lhf mentioned, we need to write this as a system of first order equations and then we can use Euler's Modified Method (EMM) on the system.
We can follow this procedure to write the second order equation as a first order system. Let $w_1(x) = y(x)$, $w_2(x) = y'(x)$, giving us:
- $w'_1 = y' = w_2$
- $w'_2 = y'' = \left(\dfrac{2}{x}\right)w_2 -\left(\dfrac{2}{x^2}\right)w_1 - \dfrac{1}{x^2}$
Our initial conditions become:
$$w_1(1) = 0, w_2(1) = 1$$
Now you can apply EMM to this system. You can see how to step through such a system (using only the Euler method, but it shows the approach) at The Euler Method for Higher Order Differential Equations.
From the given conditions, we only need one step of the algorithm, since we start at $x=1$ and want the result at $x=1.2$, with $h=0.2$.
Also note, we can compare the numerical solution to the exact result, which is:
$$y(x) = \dfrac{1}{2}\left(x^2-1\right)$$
That should be enough to guide you.
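To make the procedure concrete, here is a minimal Python sketch of one step on the system above, assuming EMM means the Heun predictor–corrector form (Euler predictor followed by a trapezoidal corrector); the function names are my own, not from the original.

```python
# RHS of the first-order system w' = F(x, w), where w = (w1, w2) = (y, y').
# This encodes y'' = (2/x) y' - (2/x^2) y - 1/x^2 from above.
def F(x, w):
    w1, w2 = w
    return (w2, (2.0 / x) * w2 - (2.0 / x**2) * w1 - 1.0 / x**2)

def emm_step(x, w, h):
    """One step of Euler's Modified Method: Euler predictor, then corrector."""
    k1 = F(x, w)
    pred = tuple(wi + h * ki for wi, ki in zip(w, k1))        # predictor
    k2 = F(x + h, pred)
    return tuple(wi + 0.5 * h * (a + b) for wi, a, b in zip(w, k1, k2))

x0, w0, h = 1.0, (0.0, 1.0), 0.2    # initial conditions w1(1) = 0, w2(1) = 1
w_new = emm_step(x0, w0, h)          # single step to x = 1.2

y_exact = 0.5 * (1.2**2 - 1.0)       # y(1.2) = 0.22 from the exact solution
print(w_new[0], y_exact)
```

Comparing `w_new[0]` with the exact value $y(1.2)=0.22$ shows how accurate a single second-order step is on this problem.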
The truncation error does not satisfy that equation; the equation is simply its definition.
Consider the following two problems:
- The first is an ODE.
$$
y'(t) = f(t, y(t))\\
y(0) = a.
$$
Its solution is some smooth function $y(t)$.
- The second is a difference equation
$$
\frac{z_{i+1} - z_i}{h} = f(t_i, z_i)\\
z_0 = a.
$$
Its solution is some discrete function $z_i$.
I've intentionally used different letters to denote these two solutions. They are quite different: the former is a smooth function while the latter is a discrete one. One needs to be careful even to compare the two, so usually a third function is introduced. It is defined as the restriction of the smooth $y(t)$ to the grid $t_i$ on which the discrete function $z_i$ is defined. Let's denote the restriction by $w_i$:
$$
w_i \equiv y(t_i).
$$
The function $w_i$ is discrete just like $z_i$, and $w_i$ coincides with $y(t)$ at the grid points. Since $w_i$ and $z_i$ are now functions of the same class, we can easily compare them:
$$
e_i = w_i - z_i \equiv y(t_i) - z_i.
$$
So, roughly speaking, the global error shows how close $y(t)$ and $z_i$ are (after restricting the former to the grid). When someone solves a problem numerically, the global error is what they are interested in. However, direct computation of the global error is usually impossible, since we generally do not have the exact values of $w_i = y(t_i)$ (in contrast to $z_i$, which we can compute easily).
And here the local truncation error concept comes to the rescue. Note that previously we compared the solutions; now we are going to compare the problems. Take $z_i$: it is the solution to the second problem, so plugging it in yields a valid identity
$$
\frac{z_{i+1} - z_i}{h} = f(t_i, z_i)\\
z_0 = a.
$$
But if we now take $w_i$ and try to plug it into the difference scheme, we won't get an identity. Instead we'll get a residual:
$$
\frac{w_{i+1} - w_i}{h} = f(t_i, w_i) \color{red}{{}+ d_i}\\
w_0 = a \color{red}{{} + d_0}.
$$
If we are very lucky, some residuals may vanish, like $d_0$ here (since $w_0 = y(0) = a$ exactly), but usually that is not the case.
So why is $d_i$ interesting, given that it too is defined in terms of $w_i$ (the unknown solution of the original problem)? It turns out that we can estimate $d_i$ without knowing the exact values of $w_i$, using only the original problem. Expanding $y(t_{i+1})$ in a Taylor series about $t_i$,
$$
d_i = \frac{w_{i+1} - w_i}{h} - f(t_i, w_i) \equiv
\frac{y(t_{i+1}) - y(t_i)}{h} - f(t_i, y(t_i)) = \\ =
y'(t_i) + h \frac{y''(t_i)}{2} + O(h^2) - f(t_i, y(t_i)) = \\ =
\color{blue}{\left[y'(t_i) - f(t_i, y(t_i))\right]} + \color{red}{h \frac{y''(t_i)}{2} + O(h^2)}
$$
The blue term in brackets is exactly the original ODE evaluated at $t_i$, and $y(t)$ is exactly its solution, so that term is equal to zero:
$$
d_i = h \frac{y''(t_i)}{2} + O(h^2).
$$
A similar result may be obtained using a different (mean-value) form of Taylor's formula:
$$
d_i = h \frac{y''(\xi_i)}{2}, \qquad \xi_i \in [t_{i}, t_{i+1}].
$$
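This estimate is easy to verify numerically. A short Python sketch, again using the illustrative problem $y'=y$, $y(0)=1$ with exact solution $y(t)=e^t$ (so $y''(t)=e^t$):

```python
import math

f = lambda t, y: y     # test ODE y' = y (chosen for illustration)
y = math.exp           # its exact solution

def lte(t, h):
    # residual of plugging w_i = y(t_i) into the explicit Euler scheme
    return (y(t + h) - y(t)) / h - f(t, y(t))

t = 0.5
for h in (0.1, 0.05, 0.025):
    print(h, lte(t, h), h * math.exp(t) / 2)   # d_i should track h*y''(t)/2
```

As $h$ shrinks, the computed residual approaches $h\,y''(t)/2$, confirming that the local truncation error of explicit Euler is $O(h)$.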
So now we can estimate the local truncation error, but we're interested in the global error.
To relate them we need to introduce one more concept: stability. Consider the two discrete problems
$$
\begin{aligned}
&\frac{z_{i+1} - z_i}{h} = f(t_i, z_i)\\
&z_0 = a
\end{aligned}
\qquad\text{and}\qquad
\begin{aligned}
&\frac{w_{i+1} - w_i}{h} = f(t_i, w_i) \color{green}{{} + d_i}\\
&w_0 = a \color{green}{{} + d_0}
\end{aligned}.
$$
Pretend for a moment that we know $d_i$, and view the second problem as a perturbation of the first one. That is reasonable, since $d_i$ is a small quantity of $O(h)$ magnitude. A difference problem is called stable if such small perturbations result in small changes of the solution; for this case that means the difference $z_i - w_i$ will also be small. Precisely,
$$
\max_i |z_i - w_i| \leq C \max_i |d_i|
$$
where $C$ is called the stability constant of the method. For the explicit Euler method it can be shown that for Lipschitz-continuous $f$
$$
C \leq e^{LT}
$$
with $L$ the Lipschitz constant of $f$ and $T$ the total integration time, $T = \max_i t_i$.
Finally we can relate the global error and the local truncation error by
$$
|e_i| \leq C \max_i |d_i|
$$
If the local truncation error tends to zero as the discrete mesh is refined, the numerical method is called consistent. The Lax equivalence theorem states that a stable and consistent method converges, in the sense that $e_i \to 0$ as the mesh is refined.
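Convergence at the expected rate can also be observed empirically: since the global error of explicit Euler is $O(h)$, halving $h$ should roughly halve $\max_i |e_i|$. A sketch, again on the illustrative problem $y'=y$, $y(0)=1$:

```python
import math

f = lambda t, z: z    # test ODE y' = y (chosen for illustration)

def max_global_error(h, T=1.0):
    """Max |e_i| over the grid for explicit Euler, using the exact y = exp(t)."""
    n = round(T / h)
    z, err = 1.0, 0.0
    for i in range(n):
        z += h * f(i * h, z)
        err = max(err, abs(math.exp((i + 1) * h) - z))
    return err

e1, e2 = max_global_error(0.01), max_global_error(0.005)
print(e1, e2, e1 / e2)   # ratio should approach 2
```

The observed ratio close to $2$ is exactly the first-order convergence predicted by combining the $O(h)$ truncation error with the stability bound.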
Best Answer
I would write the assumption about the error a little more explicitly; then you have fewer problems explaining what you do. That the error order is $a$ can be written explicitly as $$ y_h(t)=y_E(t)+C(t)h^a+\text{higher-order terms} $$ In the order estimate at $t=1$ the higher-order terms are disregarded, and two values of $h$ are used to eliminate $C(1)$ and compute $a$ (two equations for two unknowns). Then you get exactly the computation you have done, $$ \frac{y_h(1)-y_E(1)}{y_{h/2}(1)-y_E(1)}\approx 2^a $$ from which you get the estimated value of the order $a$.
For a larger collection of step sizes I get the following values:
If you do the same for the explicit midpoint method (= improved Euler), then, in contrast, the numerical values are better at the larger step sizes, but the order estimate converges more slowly toward $2$.
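The step-halving computation above can be sketched in a few lines of Python. The test problem $y'=y$, $y(0)=1$ is my own choice for illustration (the original values were computed for a different problem); for explicit Euler the estimated order should approach $1$:

```python
import math

f = lambda t, y: y    # test ODE y' = y (an assumption, just for illustration)

def euler(h, T=1.0):
    """Explicit Euler approximation y_h(T) for y' = f(t, y), y(0) = 1."""
    n = round(T / h)
    y = 1.0
    for i in range(n):
        y += h * f(i * h, y)
    return y

y_exact = math.exp(1.0)   # y_E(1) for this test problem
for h in (0.1, 0.05, 0.025):
    ratio = (euler(h) - y_exact) / (euler(h / 2) - y_exact)
    print(h, math.log2(abs(ratio)))   # estimated order a; tends to 1 for Euler
```

Swapping `euler` for a second-order stepper would show the estimate drifting toward $2$ instead, as described above.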