[Math] How does big-O notation relate to the actual error involved in numerical differentiation?

asymptotics, error-propagation, numerical-methods

Suppose I have some position data $\{x_1, x_2, \dots, x_n\}$ that was sampled at an interval $h$. If I wanted the velocity data, I could apply a finite difference scheme:

$v_1 = \frac{x_2 - x_1}{h} + O(h)$

$O(h)$ denotes that the error term is proportional to the step size. What exactly does this mean physically? For instance, say my $x$ values were sampled at 5 samples/second (so $h = 0.2$ s). Does this mean the error in my velocity data is $\pm 0.2$? How do I interpret the error in a physical sense, and can I write it like an uncertainty on a measurement? E.g., $v_1 = 10 \pm 0.2$ m/s?
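To make the setup concrete, here is a minimal sketch (Python with NumPy assumed; $x(t) = \sin t$ is just a stand-in signal with known velocity $\cos t$) of the scheme applied at a few step sizes:

```python
# Forward-difference velocity from uniformly sampled positions.
# Assumes NumPy; x(t) = sin(t) is a stand-in with known v(t) = cos(t).
import numpy as np

def forward_difference(x, h):
    """v_i = (x_{i+1} - x_i) / h for a uniformly sampled series x."""
    return (x[1:] - x[:-1]) / h

for h in (0.2, 0.1, 0.05):              # e.g. 5, 10, 20 samples per second
    t = np.arange(0.0, 2.0, h)
    v_est = forward_difference(np.sin(t), h)
    worst = np.max(np.abs(v_est - np.cos(t[:-1])))
    print(f"h = {h:4.2f}  worst error = {worst:.4f}")
```

Halving $h$ roughly halves the worst error, which seems to be the $O(h)$ behaviour in question: the error is proportional to $h$, not equal to it.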

Best Answer

Look at the simplest functions $x(t)$ and compute the exact expression $v(h,t) = \frac{x(t+h)-x(t)}{h}$ in each case.

For $x(t) = at$ you have $v(h,t) = \frac{x(t+h)-x(t)}{h} = a$ and therefore the error term $O(h)$ is zero.

For $x(t) = at^2$ you have $v(h,t) = 2at + ah$ and $O(h) = ah$. Thus the error is constant in time; it depends only on $a$ and $h$.

For $x(t) = at^3$ you have $v(h,t) = 3at^2 + 3ath + ah^2$ and $O(h) = 3ath + ah^2$. Once again you have a constant term, here $ah^2$, but now the second error term $3ath$ is time-dependent and its absolute value increases with $t$.
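A small numerical check of the two error expressions above (a sketch in Python with NumPy assumed; $a = 1$ and $h = 0.2$ are just example values):

```python
# Compare error structure: constant error a*h for x = a*t^2 versus
# time-growing error 3*a*t*h + a*h^2 for x = a*t^3.
import numpy as np

a, h = 1.0, 0.2
t = np.arange(0.0, 2.0, h)

for power, v_true in ((2, 2 * a * t), (3, 3 * a * t**2)):
    x = a * t**power
    v_est = (a * (t + h)**power - x) / h   # exact forward difference
    print(f"x = a*t^{power}: errors ->", np.round(v_est - v_true, 3))
```

The quadratic line prints a constant $0.2$ at every sample, while the cubic line's error grows linearly with $t$.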

In the first two cases you have an obvious error of the form $\pm x.y$ m/s, but in the last case it is not so easy. You can give a maximum error for the considered $t$ range; or, if the error must stay below a given bound, you get a maximum time interval over which the approximation is valid.
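Working out that last remark for the cubic case (same example values as above; `eps` is a hypothetical error budget): the worst-case error on $0 \le t \le T$ is $3aTh + ah^2$, and inverting that expression gives the largest $t$ for which a chosen bound holds.

```python
# Worst-case error of the cubic case on [0, T], and the largest t for
# which the error 3*a*t*h + a*h^2 stays below a chosen budget eps.
a, h, T = 1.0, 0.2, 2.0
max_err = 3 * a * T * h + a * h**2
print(f"max error on [0, {T}] s: {max_err:.3f} m/s")

eps = 0.5                                  # hypothetical error budget, m/s
t_max = (eps / (a * h) - h) / 3            # solve 3*a*t*h + a*h^2 = eps
print(f"error below {eps} m/s for t <= {t_max:.3f} s")
```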