Taylor Expansion – Derivation of Multivariable Taylor Series

taylor expansion

I am having trouble grokking why it is, assuming that the function is analytic everywhere (and many other assumptions that I am, no doubt, naively assuming), that this is true:

$f(x,y)=f(x_0,y_0)+[f'_x(x_0,y_0)(x-x_0)+f'_y(x_0,y_0)(y-y_0)]+\frac{1}{2!}[f''_{xx}(x_0,y_0)(x-x_0)^2+2f''_{yx}(x_0,y_0)(x-x_0)(y-y_0)+f''_{yy}(x_0,y_0)(y-y_0)^2]+\ldots$

I am familiar with the one-variable Taylor series, and I intuitively see why the 'linear' multivariable terms are what they are.

In short, I ask for a proof of this equality. If possible, I would prefer an answer free of unnecessary notational compression (such as a table of partial derivatives).

As an auxiliary question, I see a direct analogy between the first two terms $f(x,y)=f(x_0,y_0)+[f'_x(x_0,y_0)(x-x_0)+f'_y(x_0,y_0)(y-y_0)]$ and the total differential $f(x,y)-f(x_0,y_0)=\Delta f(x,y)=f'_x(x_0,y_0)\Delta x+f'_y(x_0,y_0)\Delta y$.

When $\Delta x$ and $\Delta y$ are not infinitesimally small, can I use the third term of the multivariable Taylor series to get closer to the actual change $\Delta f$?
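This can be checked numerically. In the sketch below, the function $f$, the base point, and the step sizes are all illustrative choices of mine, not from the question: the linear terms give the total-differential estimate of $\Delta f$, and adding the quadratic bracket shrinks the error considerably.

```python
# Illustrative choice: f(x, y) = x^2 + x*y + y^3 (not from the question).
def f(x, y):
    return x**2 + x*y + y**3

# Partial derivatives of this f, computed by hand.
def fx(x, y):  return 2*x + y
def fy(x, y):  return x + 3*y**2
def fxx(x, y): return 2.0
def fxy(x, y): return 1.0
def fyy(x, y): return 6*y

x0, y0 = 1.0, 1.0
dx, dy = 0.2, 0.1          # finite, not infinitesimal, steps

actual = f(x0 + dx, y0 + dy) - f(x0, y0)       # the true change in f
first  = fx(x0, y0)*dx + fy(x0, y0)*dy         # total differential (linear terms)
second = first + 0.5*(fxx(x0, y0)*dx**2        # add the quadratic bracket
                      + 2*fxy(x0, y0)*dx*dy
                      + fyy(x0, y0)*dy**2)

print(abs(actual - first), abs(actual - second))   # second-order error is far smaller
```

With these numbers the linear estimate is off by about 0.09, while the second-order estimate is off by about 0.001, which answers the auxiliary question in the affirmative for this example.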

Best Answer

Let $\phi(\boldsymbol{r})$ be a scalar field; then $\boldsymbol{a} \cdot \nabla \phi$ gives the directional derivative of $\phi$ in the direction of $\boldsymbol{a}$. That is,

$$\boldsymbol{a} \cdot \nabla \phi(\boldsymbol{r}) = \lim_{t\to 0} \frac{\phi(\boldsymbol{r} + \boldsymbol{a} t) - \phi(\boldsymbol{r})}{t}$$
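This limit is easy to verify numerically. The field $\phi$, the point $\boldsymbol{r}$, and the direction $\boldsymbol{a}$ below are my own illustrative choices; the difference quotient with a small $t$ should agree with $\boldsymbol{a} \cdot \nabla \phi$ to several decimal places.

```python
import math

# Illustrative scalar field: phi(x, y) = x^2 * y + sin(x).
def phi(x, y):
    return x**2 * y + math.sin(x)

# Its gradient, computed by hand: (2xy + cos x, x^2).
def grad_phi(x, y):
    return (2*x*y + math.cos(x), x**2)

r = (1.0, 2.0)       # the point r
a = (0.3, -0.5)      # the direction a (need not be a unit vector here)
t = 1e-6             # small parameter in the difference quotient

# Difference quotient (phi(r + a t) - phi(r)) / t ...
fd = (phi(r[0] + a[0]*t, r[1] + a[1]*t) - phi(*r)) / t
# ... versus the closed form a . grad phi evaluated at r.
gx, gy = grad_phi(*r)
exact = a[0]*gx + a[1]*gy

print(fd, exact)     # the two values agree closely
```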

Now consider $\Phi(t) = \phi(\boldsymbol{r}_0 + \boldsymbol{a}t)$ for some finite $t$, and expand it in powers of $t$. This is a one-dimensional Taylor series.

$$\Phi(t) = \Phi(0) + \Phi'(0)t + \frac{1}{2!} \Phi''(0) t^2 + \ldots$$

To substitute back in $\Phi(t) = \phi(\boldsymbol{r}_0+\boldsymbol{a}t)$, we must compute derivatives of $\Phi$ in terms of $\phi$. Again, we resort to the basic definition of the derivative.

$$\Phi'(0) = \lim_{t\to 0} \frac{\phi(\boldsymbol{r}_0+\boldsymbol{a}t) - \phi(\boldsymbol{r}_0)}{t} = \boldsymbol{a} \cdot \nabla \phi(\boldsymbol{r})\Big|_{\boldsymbol{r}=\boldsymbol{r}_0}$$

And similarly for higher derivatives. This enables us to write,

$$\phi(\boldsymbol{r}_0+\boldsymbol{a}t) = \phi(\boldsymbol{r}_0) + [\boldsymbol{a} \cdot \nabla \phi(\boldsymbol{r})] \Big|_{\boldsymbol{r}=\boldsymbol{r}_0} t + \frac{1}{2!} [\boldsymbol{a} \cdot \nabla][\boldsymbol{a} \cdot \nabla]\phi(\boldsymbol{r}) \Big|_{\boldsymbol{r}=\boldsymbol{r}_0} t^2 + \ldots$$

It is not difficult to show that this form reproduces the expansion in the original question: take $t=1$, $\boldsymbol{a} = (x-x_0, y-y_0)$, and $\boldsymbol{r}_0 = (x_0, y_0)$. Expanding $[\boldsymbol{a} \cdot \nabla][\boldsymbol{a} \cdot \nabla]\phi$ in components gives $a_x^2 \phi_{xx} + 2a_x a_y \phi_{xy} + a_y^2 \phi_{yy}$, which is exactly the quadratic bracket. Thus we have built the multivariable Taylor series from the well-established single-variable case, using only the directional derivative.
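As a final sanity check of this last step ($t = 1$, $\boldsymbol{a} = (x-x_0, y-y_0)$), here is the expansion applied to an illustrative field of my own choosing, $\phi(x,y) = e^x \cos y$, about the origin; each successive order tightens the estimate of $\phi(x,y)$.

```python
import math

# Illustrative field: phi(x, y) = e^x cos y, expanded about (x0, y0) = (0, 0).
# At the origin (by hand): phi = 1, phi_x = 1, phi_y = 0,
#                          phi_xx = 1, phi_xy = 0, phi_yy = -1.
def phi(x, y):
    return math.exp(x) * math.cos(y)

x0, y0 = 0.0, 0.0
x, y = 0.1, 0.2
ax, ay = x - x0, y - y0      # a = (x - x0, y - y0), with t = 1

# Zeroth + first order: phi(r0) + (a . grad phi)(r0)
order1 = 1.0 + 1.0*ax + 0.0*ay
# Add (1/2!)(a . grad)^2 phi = (1/2)(ax^2 phi_xx + 2 ax ay phi_xy + ay^2 phi_yy)
order2 = order1 + 0.5*(1.0*ax**2 + 2*0.0*ax*ay + (-1.0)*ay**2)

exact = phi(x, y)
print(abs(exact - order1), abs(exact - order2))   # the quadratic term tightens the estimate
```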