[Math] Error estimator for forward finite difference

finite-differences, numerical-methods

I'm studying numerical analysis, and for approximating the derivative at a given point we have the forward finite difference:

$$(\delta_+f)(\bar x) = \frac{f(\bar x + h) - f(\bar x)}{h}$$

Now, to estimate the error, from what I have read, we expand $f$ in a Taylor series as follows:

$$f(\bar x + h) = f(\bar x) + hf'(\bar x) + \frac{h^2}{2}f''(\xi)$$

What I can't understand is:

  1. Why do we expand the Taylor series about $\bar x$?
  2. Why do we say that $(\delta_+f)(\bar x)$ is a first-order approximation (when the right-hand side contains a second derivative)?
  3. Why does $\bar x$ change to $\xi$ in the second-derivative term?

I'm not looking for a very complex answer; I just want a very simple explanation (with an explicit example if possible).

Best Answer

  1. Because we are approximating the derivative at $\bar x$: the difference between the two samples of $f$ results from introducing the small step $h$ at $\bar x$, so it is natural to express $f(\bar x + h)$ in terms of the values and derivatives of $f$ at $\bar x$.

  2. It is first order because, once the Taylor expansion is substituted into the difference quotient, the leading error term is proportional to the first power of $h$ (see the short derivation after this list). The term with the second derivative is an error estimate, not part of the approximation. In more rigorous formulations the remainder of the expansion of $f(\bar x + h)$ is written in $O$ notation as $O(h^2)$, which becomes $O(h)$ after the division by $h$.

  3. By Taylor's theorem (Lagrange form of the remainder), $\xi$ is some point between $\bar x$ and $\bar x + h$. Its exact location is unknown, and you do not need to know it explicitly; it only matters that $|f''(\xi)|$ can be bounded on that interval to bound the error term.
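
To spell out point 2, substitute the Taylor expansion into the difference quotient and divide by $h$:

$$(\delta_+f)(\bar x) = \frac{f(\bar x + h) - f(\bar x)}{h} = f'(\bar x) + \frac{h}{2}f''(\xi)$$

so the error $(\delta_+f)(\bar x) - f'(\bar x) = \frac{h}{2}f''(\xi)$ shrinks like the first power of $h$: halving $h$ roughly halves the error, which is exactly what "first order" means.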

As an example, consider $f(x) = e^x$. Then

$$ (\delta_+ f)(\bar{x}) = e^{\bar{x}}\,\frac{e^h - 1}{h} = e^{\bar{x}} + \frac{h}{2}\, e^{\xi} $$

for some $\xi$ between $\bar{x}$ and $\bar{x} + h$, where the term $\frac{h}{2} e^{\xi}$ is the error in the approximation. That is, you could collect the remaining terms of the Taylor expansion of $e^{\bar{x}}\,\frac{e^h - 1}{h}$ and write them as $\frac{h}{2} e^{\xi}$.
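
As a quick numerical check (a minimal sketch that is not part of the original answer; the choice of $\bar x = 1$ and the step sizes are arbitrary), the first-order behavior is easy to observe: each time $h$ shrinks by a factor of 10, so does the error, and the ratio error$/h$ settles near $e^{\bar x}/2$.

```python
import math

def forward_difference(f, x, h):
    """Forward finite difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

# f(x) = e^x at x_bar = 1.0; the exact derivative is also e^x,
# so the error of the approximation is easy to measure.
x_bar = 1.0
exact = math.exp(x_bar)

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    approx = forward_difference(math.exp, x_bar, h)
    error = abs(approx - exact)
    # The error is roughly (h/2) * e^{x_bar}, so error/h tends to e/2 ≈ 1.359.
    print(f"h = {h:.0e}   error = {error:.3e}   error/h = {error / h:.4f}")
```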
