The role of machine precision in the derivative approximation

Tags: derivatives, numerical-methods, real-analysis

You wish to compute $f^{\prime}(x)$ using the approximation
$$\widehat{f^{\prime}}(x)=\frac{f({x}+h)-{f}(x)}{h}$$ for some fixed $0<h<0.01$. Suppose that the computer already contains a built-in function for computing $f$, with relative error less than machine precision, $t$, and you know the value of a constant $A$ satisfying $\left|f^{\prime \prime}(y)\right| \leq A$ for all $y$ with $|y-x|<0.01$.

(a) Estimate the total error incurred when computing $f^{\prime}(x)$. Give your answer in terms of $f(x), f^{\prime}(x)$, $h$ and $t$.


My attempt:

Taylor's remainder theorem says $$f(x+h)=f(x)+hf'(x)+(h^2/2!)f''(\zeta)$$ for some $\zeta \in (x,x+h)$. Now $$
\widehat{f^{\prime}}(x)=\frac{f({x}+h)-{f}(x)}{h}=f'(x)+(h/2)f''(\zeta),
$$

implies $$|\widehat{f^{\prime}}(x)-f'(x)| \leq (h/2)A,$$ since $\zeta\in(x,x+h)$ and $h<0.01$ give $|\zeta-x|<0.01$, so $|f''(\zeta)|\leq A$.
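A quick numerical check of this truncation bound (my own sketch, not part of the problem: I use $f(x)=e^x$ as a stand-in, so $A$ can be taken as $e^{x+h}$ on the interval $[x,x+h]$):

```python
import math

# Hypothetical illustration: for f(x) = exp(x) the truncation error of the
# forward difference should satisfy |approx - f'(x)| <= (h/2)*max|f''| on
# [x, x+h], as long as h is large enough that rounding is negligible.
f = math.exp
x = 1.0
exact = math.exp(x)                      # f'(x) = exp(x)

for h in [1e-2, 1e-3, 1e-4]:
    approx = (f(x + h) - f(x)) / h
    err = abs(approx - exact)
    bound = (h / 2) * math.exp(x + h)    # A = max |f''| on [x, x+h]
    print(f"h={h:.0e}  error={err:.3e}  bound={bound:.3e}")
```

The observed error tracks $(h/2)f''(\zeta)$ closely and always stays below the bound.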

Doubt:

What exactly is the role of $t$ here? How does the relative error in computing $f$ with the built-in function come into play?

Best Answer

You cannot evaluate $f$ exactly; what you actually compute is $f(x)(1+\delta(x))$ with $|\delta(x)|< t$. Thus the evaluated difference quotient also picks up an error term with upper bound $$\frac{2|f(x)|t}{h}.$$ Combined with the truncation error from your attempt, the total error is bounded by $(h/2)A + 2|f(x)|t/h$.
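To see both error terms at work, here is a small sketch (my own illustration, not from the answer) with $f(x)=\sin x$ at $x=1$: the error first shrinks as $h$ decreases, then grows again once the rounding term $2|f(x)|t/h$ dominates the truncation term $(h/2)A$:

```python
import math

# The total error bound (h/2)*A + 2*|f(x)|*t/h predicts that shrinking h
# eventually makes things WORSE: the rounding term grows like 1/h.
f = math.sin
x = 1.0
exact = math.cos(x)
t = 2**-52                      # double-precision machine epsilon

errs = {}
for k in range(1, 16):
    h = 10.0**-k
    errs[k] = abs((f(x + h) - f(x)) / h - exact)

# Truncation dominates for large h, rounding for tiny h; the sweet spot
# is near h* = 2*sqrt(|f(x)|*t/A), which is around 1e-8 here.
print(min(errs, key=errs.get))
```

Minimizing the bound over $h$ gives $h^\ast = 2\sqrt{|f(x)|\,t/A}$, which for double precision ($t\approx 2.2\cdot 10^{-16}$) lands near $10^{-8}$, matching the experiment.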

If you want to go into finer detail, note that the operation $x+h$ itself also incurs a rounding error bounded by $|x|t$ (simplified, valid while $|h|\ll |x|$). This means that the evaluation of $f(x+h)$ carries not only the error term bounded by $|f(x)|t$, but also an additional error term bounded by about $$|f'(x)\,x|\,t.$$


Example: $f(x)=\sin x$ at $x=1$. The second error can be compensated by making the same error in the denominator, i.e. dividing by $((x+h)-x)$ instead of $h$. The plot shows that this really makes a (small) difference.
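A sketch of this compensation (my own reconstruction of the trick, not the answerer's code): the rounded sum $x+h$ realizes a step that differs slightly from the nominal $h$, and dividing by the step that was actually taken cancels that error.

```python
import math

# Dividing by the actually realized step ((x + h) - x) instead of the
# nominal h compensates the rounding error made when forming x + h.
# f(x) = sin x at x = 1, as in the example above.
x = 1.0
exact = math.cos(x)

for h in [1e-7, 1e-8, 1e-9]:
    naive = (math.sin(x + h) - math.sin(x)) / h
    hh = (x + h) - x                  # the step that was really taken
    comp = (math.sin(x + h) - math.sin(x)) / hh
    print(f"h={h:.0e}  naive err={abs(naive - exact):.2e}  "
          f"compensated err={abs(comp - exact):.2e}")
```

Note that `(x + h) - x` is in general not equal to `h` in floating point, which is exactly the effect being compensated.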

[Plot: error of the difference quotient for $f(x)=\sin x$ at $x=1$, with and without the compensated denominator $((x+h)-x)$]
