[Math] Why does the Lagrange remainder work for multivariate functions

multivariable-calculus, taylor-expansion

I am familiar with the proof of the Lagrange remainder for single-variable functions (see Theorem $4$), but why does this concept carry over to multivariate functions?

If $f: \mathbb R^d\to \mathbb R$ is $n+1$ times differentiable, then there exists a point $\mathbf c$, with each $c_i$ between $a_i$ and $x_i$, such that $$R_n(\mathbf x,\mathbf a)=\sum_{|\alpha|=n+1}\frac {D^\alpha f(\mathbf c)}{\alpha!}(\mathbf x-\mathbf a)^\alpha$$


My attempt:

From Wikipedia,
$$R_k(\mathbf x,\mathbf a)=\sum_{|\alpha|=k+1}\left(\begin{matrix} k+1 \\ \alpha\end{matrix} \right)\frac{(\mathbf x-\mathbf a)^\alpha }{k!}
\int_0^1 (1-t)^k (D^\alpha f)(\mathbf a+t(\mathbf x-\mathbf a))\,dt\tag1$$

and using Theorem $2$ (the mean value theorem for integrals) we would get
$$\begin{align}
R_k(\mathbf x,\mathbf a)&=\sum_{|\alpha|=k+1}\left(\begin{matrix} k+1 \\ \alpha\end{matrix} \right)\frac{(\mathbf x-\mathbf a)^\alpha }{(k+1)!}
(D^\alpha f)(\mathbf c) \tag2\\
&=\sum_{|\alpha|=k+1}\frac {D^\alpha f(\mathbf c)}{\alpha!}(\mathbf x-\mathbf a)^\alpha
\end{align}$$

However, I don't think it is possible to go from $(1)$ to $(2)$: when pulling $(D^\alpha f)(\mathbf a+t(\mathbf x-\mathbf a))$ out of each integral, the mean-value point $t\in(0,1)$ may differ from summand to summand, so there need not be a single $\mathbf c$ that works for every $\alpha$.
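This worry is genuine. The following is a small numerical sketch of my own (not taken from the linked theorems), using the hypothetical function $f(x,y)=e^x+e^{3y}$ with $\mathbf a=(0,0)$, $\mathbf x=(0.5,0.5)$, $k=2$: the mean-value point produced by applying Theorem $2$ to each summand of $(1)$ really does differ between the summands $\alpha=(3,0)$ and $\alpha=(0,3)$.

```python
import math

# Hypothetical example: f(x, y) = exp(x) + exp(3y). Along the segment from
# a = (0, 0) to x = (0.5, 0.5), the nonzero order-3 partials are
#   D^(3,0) f = exp(x)      -> proportional to e^{0.5 t}
#   D^(0,3) f = 27 exp(3y)  -> proportional to e^{1.5 t}
# (constant factors cancel in the mean-value equation below).
k = 2

def simpson(g, n=2000):
    # Composite Simpson's rule on [0, 1].
    h = 1.0 / n
    return (g(0) + g(1) + sum((4 if i % 2 else 2) * g(i * h)
                              for i in range(1, n))) * h / 3

def mean_value_t(c):
    # Solve  int_0^1 (1-t)^k e^{c t} dt = e^{c t*} / (k+1)  for t*.
    # The right-hand side is increasing in t*, so bisection applies.
    target = simpson(lambda t: (1 - t) ** k * math.exp(c * t))
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if math.exp(c * mid) / (k + 1) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t1 = mean_value_t(0.5)   # mean-value point for the summand alpha = (3, 0)
t2 = mean_value_t(1.5)   # mean-value point for the summand alpha = (0, 3)
print(t1, t2)            # two different mean-value points in (0, 1)
```

Here `t1` comes out near $0.26$ and `t2` near $0.28$, so no single $t$ (hence no single $\mathbf c$) serves both summands in the step from $(1)$ to $(2)$.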

Best Answer

It would be easier to apply Theorem $2$ earlier. Set $g(t) = f(\mathbf a + t(\mathbf x - \mathbf a))$, so that single-variable Taylor's theorem with integral remainder gives $$\begin{align} f(\mathbf{x})&=\sum_{j=0}^k\frac{1}{j!}g^{(j)}(0)\,+\int_0^1 \frac{(1-t)^k }{k!}\, g^{(k+1)}(t)\, dt \\ &= \sum_{j=0}^k\frac{1}{j!}g^{(j)}(0) \, + \frac{g^{(k+1)}(\lambda)}{(k+1)!} \qquad (\lambda\in [0,1]), \end{align}$$ where the second line applies Theorem $2$ to the single integral, so only one $\lambda$ is needed.

By the chain rule and the multinomial theorem, $$g^{(k+1)}(\lambda)=\sum_{|\alpha|=k+1}\binom{k+1}{\alpha}(D^\alpha f)(\mathbf c)\,(\mathbf x-\mathbf a)^\alpha, \qquad \mathbf c=\mathbf a+\lambda(\mathbf x-\mathbf a),$$ and since $\binom{k+1}{\alpha}=\frac{(k+1)!}{\alpha!}$, the Lagrange remainder is $$R_k(\mathbf x,\mathbf a)=\sum_{|\alpha|=k+1}\frac {D^\alpha f(\mathbf c)}{\alpha!}(\mathbf x-\mathbf a)^\alpha.$$ Note that $\mathbf c$ lies on the segment connecting $\mathbf a$ and $\mathbf x$; in particular, each $c_i$ is between $a_i$ and $x_i$.
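As a sanity check on this, here is a sketch of my own (with the hypothetical test function $f(x,y)=e^{x+2y}$, whose partials $D^\alpha f=2^{\alpha_2}f$ are available in closed form) that numerically locates a single $\lambda\in(0,1)$ for which the Lagrange form matches the true remainder $f(\mathbf x)-T_k(\mathbf x)$.

```python
import math

# Hypothetical test function f(x, y) = exp(x + 2y): D^alpha f = 2**alpha_2 * f.
def f(p):
    return math.exp(p[0] + 2 * p[1])

def Df(alpha, p):                      # D^alpha f at point p
    return 2 ** alpha[1] * f(p)

def fact(alpha):                       # alpha! = alpha_1! * alpha_2!
    return math.factorial(alpha[0]) * math.factorial(alpha[1])

def indices(order):                    # all alpha with |alpha| = order
    return [(i, order - i) for i in range(order + 1)]

a, x, k = (0.0, 0.0), (0.3, 0.2), 2
d = (x[0] - a[0], x[1] - a[1])

# Degree-k Taylor polynomial of f about a, evaluated at x.
T = sum(Df(al, a) / fact(al) * d[0] ** al[0] * d[1] ** al[1]
        for j in range(k + 1) for al in indices(j))
R = f(x) - T                           # exact remainder

def lagrange_sum(lam):
    # Lagrange form evaluated at c = a + lam * (x - a).
    c = (a[0] + lam * d[0], a[1] + lam * d[1])
    return sum(Df(al, c) / fact(al) * d[0] ** al[0] * d[1] ** al[1]
               for al in indices(k + 1))

# For this f, lagrange_sum is increasing in lam and the true remainder lies
# between its values at 0 and 1, so bisection finds the promised lambda.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if lagrange_sum(mid) < R:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
print(lam)                             # a single lambda in (0, 1) works
```

For these values `lam` comes out near $0.26$, and $\mathbf c=\lambda\,\mathbf x$ indeed lies on the segment from $\mathbf a$ to $\mathbf x$, as the answer asserts.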