Guarantee that a function is differentiable in R2 at a certain point

continuity, derivatives, multivariable-calculus, partial-derivative

I just read a theorem in my class notes that states:

Let $f:\mathbb R^2\rightarrow \mathbb R$ be a function and $(x_0, y_0) \in \mathbb R^2$ a point such that the partial derivatives exist on a ball centered at $(x_0,y_0)$ and both are continuous at $(x_0,y_0)$. Then $f$ is differentiable at $(x_0,y_0)$.

This raised some questions for me.

Consider the function
$$f(x,y) = \begin{cases} 1 & \text{if $x=0$ or $y=0$} \\ 0 & \text{elsewhere} \end{cases}$$

I know differentiability at a point implies continuity at that point. But here the partial derivatives exist and are continuous on a ball around $(0,0)$, so by the theorem that would imply differentiability, which in turn would imply continuity; yet this function is not continuous at $(0,0)$.

Then consider the function
$$g(x,y) = \begin{cases} |x| & \text{if $y=0$} \\ |y| & \text{if $x=0$} \\ 0 & \text{elsewhere} \end{cases}$$

This function is continuous at $(0,0)$, but the partial derivatives do not exist (because of the absolute values along the axes), so I cannot use the above theorem. Now imagine the function $h$ obtained by rotating $g$ by 45 degrees (so the absolute values sit on the lines $y=x$ and $y=-x$). The partial derivatives of $h$ do exist and are continuous on a ball around $(0,0)$, so by the theorem above $h$ is differentiable at $(0,0)$.

On the other hand, $h$ and $g$ are practically the same function. Why do I need partial derivatives, rather than just two arbitrary directional derivatives, to ensure differentiability?

Now consider the function
$$i(x,y) = \begin{cases} x^2 & \text{if $y=0$} \\ y^2 & \text{if $x=0$} \\ 1 & \text{elsewhere} \end{cases}$$

This function does have partial derivatives, and they are continuous around $(0,0)$, so that would imply differentiability; but, again, this function is not continuous at $(0,0)$.

I'm absolutely convinced I'm getting something wrong (maybe a lot of things), but I can't see my own misconceptions, so I would appreciate some help. What am I doing wrong? Or, in the unlikely case that the theorem is wrong, why?

Best Answer

Your main misunderstanding relates to the existence & continuity of the partial derivatives on a ball, rather than at the point. I'll look at your first function because the other cases are similar.

Let's evaluate the derivative $\frac{\partial f}{\partial x}$ at some point just off the origin, but on the $y$-axis. From the definition of the partial derivative, we have

$$\left.\frac{\partial f}{\partial x}\right\rvert_{(0, \varepsilon)} = \lim_{h \to 0} \frac{f(h, \varepsilon) - f(0, \varepsilon)}{h} = \lim_{h\to 0} \frac{0 - 1}{h} = \lim_{h\to 0} \frac{-1}{h}$$

And this limit does not exist, no matter which non-zero value of $\varepsilon$ we pick. So the partial derivatives of $f$ do not exist on the entirety of any ball around $(0, 0)$.
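To make the ball-versus-point distinction concrete, here is a quick supplementary check: at the origin itself the partial derivatives of $f$ do exist, because $f$ is identically $1$ along both axes,

$$\left.\frac{\partial f}{\partial x}\right\rvert_{(0, 0)} = \lim_{h \to 0} \frac{f(h, 0) - f(0, 0)}{h} = \lim_{h\to 0} \frac{1 - 1}{h} = 0,$$

and likewise $\left.\frac{\partial f}{\partial y}\right\rvert_{(0, 0)} = 0$. So the hypothesis that fails is the existence (and continuity) of the partial derivatives on a whole ball around $(0,0)$, not their existence at the point $(0,0)$ itself. The same difference-quotient computation applied to your third function $i$ gives $\frac{i(h,\varepsilon) - i(0,\varepsilon)}{h} = \frac{1 - \varepsilon^2}{h}$ for small $h \neq 0$, which again has no limit as $h \to 0$ when $0 < |\varepsilon| < 1$, so $i$ also fails the "on a ball" hypothesis.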