(Why) Do Functions have to be Twice Differentiable to use linear approximation

calculus, derivatives, linear-approximation

According to the Wikipedia article on Linear approximation:

Given a twice continuously differentiable function $f$ of one real variable, Taylor's theorem for the case $n = 1$ states that

$$f(x) = f(a) + f'(a)(x - a) + R_2$$

I'm not really dealing with Taylor polynomials yet; right now I'm just learning calculus and linear approximation. I'm wondering whether the condition of being twice continuously differentiable is required to apply linear approximation, and if so, why.

Best Answer

To understand what it means, take any function $f:E\rightarrow \mathbb{R}$ and a point $x_0\in E$, and consider $$R(x) = f(x)-f(x_0)-f'(x_0)(x-x_0).$$ You can always write this down (and rearrange it to get your expression); what matters is exactly how $R(x)$ behaves as $x\rightarrow x_0$: is $f(x_0)+f'(x_0)(x-x_0)$ a good linear approximation of $f$, or is it completely useless?
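For a concrete instance (my own example, not part of the original answer): take $f(x)=x^2$ and $x_0=1$, so $f'(x_0)=2$ and $$R(x) = x^2 - 1 - 2(x-1) = (x-1)^2,$$ which shrinks quadratically as $x\rightarrow x_0$; here the tangent line $y = 1 + 2(x-1)$ is a very good approximation near $x_0$.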

For example, if $f$ is differentiable at $x_0$ and we write $R_h = f(x_0+h)-f(x_0)-f'(x_0)h$, then $$\lim_{h\rightarrow 0} \frac{R_h}{h} = \lim_{h\rightarrow 0} \left(\frac{f(x_0+h)-f(x_0)}{h} - f'(x_0)\right) = 0$$ So you already have $f(x_0+h) = f(x_0) + f'(x_0) h + R_h$, and now you know that if $f$ is differentiable the linear approximation is good. How good? Not only does $\lim_{h\rightarrow 0} R_h = 0$, but $R_h$ goes to $0$ faster than $h$ does, i.e. $R_h = o(h)$. Perhaps you can do even better.
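A quick numerical sketch of this behaviour (my own illustration, assuming NumPy; the choice of $f(x)=\sin x$ and $x_0=0.5$ is arbitrary):

```python
import numpy as np

# Remainder of the linear approximation: R_h = f(x0 + h) - f(x0) - f'(x0) * h
f, fprime = np.sin, np.cos   # a sample differentiable function and its derivative
x0 = 0.5                     # arbitrary base point

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    R_h = f(x0 + h) - f(x0) - fprime(x0) * h
    # Differentiability at x0 means R_h / h -> 0, i.e. R_h = o(h)
    print(f"h = {h:.0e}   R_h = {R_h:+.3e}   R_h/h = {R_h / h:+.3e}")
```

Both columns shrink as $h\rightarrow 0$, and $R_h/h$ shrinking is exactly the statement that the remainder is $o(h)$.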

The extra differentiability condition you mentioned gives you more information about how the remainder $R_h$ behaves (this is what aitor commented). If $f$ is twice continuously differentiable, Taylor's theorem gives $R_h = \frac{f''(\xi)}{2}h^2$ for some $\xi$ between $x_0$ and $x_0+h$, so the error is of order $h^2$, which is stronger than $R_h = o(h)$. In other words, you don't need twice differentiability to form the linear approximation; the stronger hypothesis is what lets you quantify how small the error is.
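To see what the extra hypothesis buys, here is a rough numerical comparison (again my own sketch, assuming NumPy, with arbitrarily chosen examples): for $f(x)=\sin x$, which is twice continuously differentiable, the ratio $R_h/h^2$ settles near a constant, while for $g(x)=x^{4/3}$ at $x_0=0$, which is differentiable there but not twice differentiable, $R_h/h$ still tends to $0$ but $R_h/h^2$ blows up.

```python
import numpy as np

x0 = 0.5  # base point for the smooth example (arbitrary choice)

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    # f(x) = sin(x): twice continuously differentiable, remainder ~ (f''(x0)/2) * h^2
    R_f = np.sin(x0 + h) - np.sin(x0) - np.cos(x0) * h
    # g(x) = x^(4/3) at x0 = 0: g(0) = 0 and g'(0) = 0, so the remainder is exactly h^(4/3)
    R_g = h ** (4 / 3)
    print(f"h = {h:.0e}   R_f/h^2 = {R_f / h**2:+.3f}   "
          f"R_g/h = {R_g / h:.3f}   R_g/h^2 = {R_g / h**2:.1f}")
```

So linear approximation applies to both functions, but only in the twice-differentiable case do you get the clean $h^2$ control on the error.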
