Different versions of the mean value theorem in several variables

multivariable-calculus

According to Calculus: A Complete Course by Adams & Essex, one version of the mean value theorem in several variables is given by

If $f_1(a,b)$ and $f_2(a,b)$ [notation for partial derivatives] are continuous in a neighbourhood of the point $(a,b)$, and if the absolute values of $h$ and $k$ are sufficiently small, then there exist numbers $\theta_1$ and $\theta_2$, each between $0$ and $1$ such that $$f(a+h,b+k)-f(a,b)=hf_1(a+\theta_1 h,b+k)+kf_2(a,b+\theta_2 k)$$

Another version in the same book is given by

If $f(x,y)$ has first partial derivatives continuous near every point of the straight line segment joining the points $(a,b)$ and $(a+h,b+k)$, then there exists a number $\theta$ between $0$ and $1$ such that $$f(a+h,b+k)=f(a,b)+hf_1(a+\theta h,b+\theta k)+kf_2(a+\theta h,b+\theta k)$$

What is the difference between these two versions? The hypotheses on $f$ seem similar.

Bonus question: Why can't the latter version be used to prove that continuity of the first partial derivatives of a function at a point implies that the function is differentiable at that point?

Best Answer

HINT: You should think about two different paths. One path is the line segment from $(a,b)$ to $(a+h,b+k)$. The other path is the two-segment path, going first in the $y$-direction from $(a,b)$ to $(a,b+k)$ and then in the $x$-direction from $(a,b+k)$ to $(a+h,b+k)$.
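
To spell the hint out a little (a sketch of the standard argument, not taken from the book): each version comes from applying the one-variable mean value theorem along one of these two paths.

For the two-segment path, write
$$f(a+h,b+k)-f(a,b)=\bigl[f(a+h,b+k)-f(a,b+k)\bigr]+\bigl[f(a,b+k)-f(a,b)\bigr].$$
Applying the one-variable mean value theorem to $x\mapsto f(x,b+k)$ in the first bracket and to $y\mapsto f(a,y)$ in the second gives numbers $\theta_1,\theta_2\in(0,1)$ with
$$f(a+h,b+k)-f(a,b)=hf_1(a+\theta_1 h,\,b+k)+kf_2(a,\,b+\theta_2 k),$$
which is the first version.

For the straight segment, set $g(t)=f(a+th,\,b+tk)$ for $t\in[0,1]$. By the chain rule, $g'(t)=hf_1(a+th,\,b+tk)+kf_2(a+th,\,b+tk)$, and the one-variable mean value theorem applied to $g$ gives a single $\theta\in(0,1)$ with $g(1)-g(0)=g'(\theta)$, i.e.
$$f(a+h,b+k)-f(a,b)=hf_1(a+\theta h,\,b+\theta k)+kf_2(a+\theta h,\,b+\theta k),$$
which is the second version. Comparing the two derivations shows where each version needs its hypotheses on the partial derivatives to hold.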