[Math] Error propagation in numerical analysis.

numerical methods

my professor is using the following slides to teach error propagation in numerical analysis:

*(slide images not reproduced here)*

I am having a hard time understanding the material because of its extremely formal notation and brevity.

Can anyone explain this discussion? I am mainly concerned with slide $2$-$3$.

Best Answer

The first slide, 2-3, is much less complicated than it looks. The context is a numerical computation that takes place in some high-dimensional space. The starting point is denoted by $d$ and the ending point by $w$. In more detail, the process has $K$ steps, numbered from $1$ to $K$. Each step propagates the error from the previous point to the next and introduces some error of its own, so the original error is scaled at each step and some more error is added. The process is assumed to be well behaved enough that linearizing the error propagation makes sense; that is the meaning of the last equation on the slide. The quantity $\, \delta[\tilde{v}^{(k-1)}] \,$ is the relative error coming into step $\, k, \,$ where it is linearly transformed by the matrix $T^{(k)}$, and the step adds some error $\eta^{(k)}$ of its own.
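Written out, the linearized recursion just described would look like the following (this is my reconstruction from the description; the slide's exact indexing and notation may differ slightly):
$$\delta[\tilde v^{(k)}] \;\approx\; T^{(k)}\,\delta[\tilde v^{(k-1)}] \;+\; \eta^{(k)}, \qquad k = 1, \dots, K.$$
Unrolling it over all $K$ steps, the incoming error is multiplied by the product of the $T^{(k)}$, and each step's own error is carried forward by the remaining factors:
$$\delta[\tilde v^{(K)}] \;\approx\; T^{(K)} \cdots T^{(1)}\,\delta[\tilde v^{(0)}] \;+\; \sum_{k=1}^{K} T^{(K)} \cdots T^{(k+1)}\,\eta^{(k)},$$
with the convention that the empty product (the $k = K$ term) is the identity. This is exactly the "scaled plus added" structure described above.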

The second slide, 2-4, is even simpler: it describes the one-step, one-dimensional case of the previous slide. Given a function $\, y = \phi(x), \,$ let $\, \Delta y \,$ and $\, \Delta x \,$ be the absolute errors of $\, y \,$ and $\, x, \,$ related by $\, y + \Delta y = \phi(x + \Delta x). \,$ The linearization uses a truncated Taylor series for $\, \phi: \,$ writing the relative error of $\, x \,$ as $\, \varepsilon, \,$ so that $\, \Delta x = \varepsilon x, \,$ the first-order approximation is $\, \Delta y = x \phi'(x)\varepsilon + O(\varepsilon^2). \,$ The last equation, $\, T = \frac{x}{y} \frac{dy}{dx}, \,$ is just the ratio of the relative error of $\, y \,$ to the relative error of $\, x. \,$
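To spell out the derivation (a sketch in the notation above): the Taylor expansion gives
$$y + \Delta y = \phi(x + \Delta x) = \phi(x) + \phi'(x)\,\Delta x + O(\Delta x^2),$$
and subtracting $y = \phi(x)$ and dividing by $y$ yields
$$\frac{\Delta y}{y} \;\approx\; \frac{x\,\phi'(x)}{\phi(x)} \cdot \frac{\Delta x}{x} \;=\; T\,\frac{\Delta x}{x}.$$
For example, with $\phi(x) = \sqrt{x}$ one gets $T = \frac{x}{\sqrt{x}} \cdot \frac{1}{2\sqrt{x}} = \frac{1}{2}$, so the relative error is halved, while with $\phi(x) = x^n$ one gets $T = n$, so the relative error is amplified $n$-fold.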
