I'm studying numerical analysis and for the approximation of a derivative around a given point we have for the forward finite difference:
$$(\delta_+f)(\bar x) = \frac{f(\bar x + h) - f(\bar x)}{h}$$
Now to estimate the error, from what I read, we expand the Taylor series as follows:
$$f(\bar x + h) = f(\bar x) + hf'(\bar x) + \frac{h^2}{2}f''(\xi)$$
What I can't understand is:
- Why do we make the expansion of the Taylor series about $\bar x$ ?
- Why do we say that $(\delta_+f)(\bar x)$ is a first-order approximation? (when the right-hand side involves a second derivative?)
- Why in the second derivative $\bar x$ changes to $\xi$
I'm not searching for a very complex answer, I just want a very simple explanation (with an explicit example if possible).
Best Answer
Because the quantity we want to approximate is $f'(\bar x)$, the derivative at $\bar x$. Expanding about $\bar x$ rewrites the sample $f(\bar x + h)$ in terms of $f$ and its derivatives at $\bar x$, so that when you form the difference quotient the $f'(\bar x)$ term appears and everything left over becomes an error term.
It is first order because the error of the approximation is proportional to the first power of $h$. The term with the second derivative is the remainder of the Taylor expansion; after you divide by $h$ it becomes $\frac{h}{2}f''(\xi)$, which in more rigorous formulations is written in $O$ notation as $O(h)$.
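To see this explicitly, substitute the Taylor expansion into the difference quotient; the $f(\bar x)$ terms cancel and dividing by $h$ gives
$$(\delta_+f)(\bar x) = \frac{f(\bar x) + hf'(\bar x) + \frac{h^2}{2}f''(\xi) - f(\bar x)}{h} = f'(\bar x) + \frac{h}{2}f''(\xi)$$
so the error $(\delta_+f)(\bar x) - f'(\bar x) = \frac{h}{2}f''(\xi)$ shrinks linearly in $h$: that is what "first order" means.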
$\xi$ is some unknown number between $\bar x$ and $\bar x + h$; it comes from the Lagrange form of the Taylor remainder (essentially the mean value theorem). You never need to know it explicitly; it only serves to bound the error term, e.g. by $\max_{t \in [\bar x, \bar x + h]} |f''(t)|$.
As an example, consider $f(x) = e^x$. Then
$$ (\delta_+ f)(\bar{x}) = e^{\bar{x}} \, \frac{e^h - 1}{h} = e^{\bar{x}} + \frac{h}{2} \, e^{\xi} $$
where the term $\frac{h}{2} e^{\xi}$, with $\xi$ between $\bar x$ and $\bar x + h$, covers the error of the approximation. That is, since $\frac{e^h - 1}{h} = 1 + \frac{h}{2} + \frac{h^2}{6} + \cdots$, the whole tail of the Taylor series beyond the first term is absorbed into $\frac{h}{2} e^{\xi}$.
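You can also check the first-order behaviour numerically. Here is a small sketch in Python (the choice of $f(x) = e^x$, the point $x = 1$, and the step sizes are just illustrative): halving $h$ roughly halves the error, and the ratio $\text{error}/h$ settles near $e^{\bar x}/2$, matching the $\frac{h}{2} e^{\xi}$ error term.

```python
import math

x = 1.0                  # expand around x = 1; the exact derivative of e^x is e^x
exact = math.exp(x)

for h in [0.1, 0.05, 0.025, 0.0125]:
    approx = (math.exp(x + h) - math.exp(x)) / h   # forward difference
    error = abs(approx - exact)
    # error is about (h/2) * e^xi, so error/h stays near e^x / 2
    print(f"h = {h:7.4f}  error = {error:.6f}  error/h = {error / h:.4f}")
```

Running this shows the error column shrinking by roughly a factor of two each row, which is exactly the signature of a first-order method.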