How does the $\epsilon$-$\delta$ definition explain the idea of 'approach'?

Tags: calculus, epsilon-delta, limits

Suppose $\lim_{x\to x_0}f(x)=L$:

I. Informally, the idea of the limit is that as $x$ approaches $x_0$, the value that $f(x)$ approaches is its limit at $x_0$.

II. The $\epsilon$-$\delta$ definition states that for every $\epsilon>0$ there exists a $\delta>0$ such that:
$0<|x-x_0|<\delta\Rightarrow|f(x)-L|<\epsilon$. If, for any value of $\epsilon$, I can provide such a $\delta$, then $L$ is the limit of $f(x)$ at $x_0$.

In other words, for any given distance from $L$, I have to find a distance from $x_0$ such that for every $x$ closer to $x_0$ than that, the corresponding value $f(x)$ is within the given distance of $L$.
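For example (a toy example of my own, just to check that I'm reading the definition correctly): to show $\lim_{x\to 2}(3x+1)=7$, given any $\epsilon>0$ I can take $\delta=\epsilon/3$, since
$$0<|x-2|<\delta \;\Rightarrow\; |(3x+1)-7| = 3|x-2| < 3\delta = \epsilon.$$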

In many of the calculus-related videos that I watch, the informal definition (the 'approach' definition) is used to prove theorems and other things. Right now I'm only dealing with single-variable calculus, so I can imagine on a 2D graph how the values of $f(x)$ approach $L$ as $x$ approaches $x_0$. But what about in a multivariable setting?
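My guess (just how I imagine the definition generalizes, not something I've studied yet) is that in, say, two variables it would read
$$0<\sqrt{(x-a)^2+(y-b)^2}<\delta \;\Rightarrow\; |f(x,y)-L|<\epsilon,$$
with $|x-x_0|$ replaced by the distance in the plane, but I can't picture the 'approach' there the way I can on a 2D graph.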

So how does the $\epsilon$-$\delta$ definition explain the 'approach' idea, and why is the 'approach' picture of the limit always valid?
I've also read a statement that $\epsilon$-$\delta$ does not actually define the limit, but is instead just a way to prove that a claimed limit is true or false. Is that right?

This is one statement in a proof I saw: if $\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}=f'(x)$, then one may write $\frac{f(x+h)-f(x)}{h} = f'(x) + \sigma(h)$, where $\sigma(h)$ is a 'junk term' that approaches $0$ as $h$ approaches $0$. The proof was based on this assumption.
Mathematically, when I take the limit on both sides I get $\lim_{h \to 0} \sigma(h)=0$. But I'm still skeptical whether this (writing the function as the limit plus 'junk') can be done in every case.
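For a concrete case (my own check, not taken from the proof): with $f(x)=x^2$,
$$\frac{(x+h)^2-x^2}{h} = 2x+h = f'(x)+\sigma(h), \qquad \text{so } \sigma(h)=h\to 0 \text{ as } h\to 0,$$
and the 'junk term' is explicit. What I can't see is why such a splitting must exist for every function that has a derivative.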

Best Answer

A short, not-quite-an-answer: we don't explain "approaches", we do without it, by carefully defining something equivalent that we can reason with.

When you think of limits in terms of "approaching", you run into philosophical questions about time passing or about numbers being infinitely close together.

Mathematicians have decided to replace vague notions of "infinitely close" by requiring "infinitely many" inequalities. That's what you do when you argue that "for every $\epsilon$ ...".

Added after the question was edited:

It's routine algebra to show that these two statements say the same thing about a function $f$ and a number $L$: $$ \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = L $$ and $$ \lim_{h \to 0} \left( \frac{f(x+h) - f(x)}{h} - L \right) = 0 . $$ If you write $\sigma(h)$ for the expression in the large parentheses in the second equation then showing that $L$ is the derivative of $f$ at $x$ is just the same as showing $\sigma$ has limit $0$ as $h \to 0$. It's an algebraic trick that's sometimes useful.
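Spelled out, the equivalence is just the definition read twice: for a given $\epsilon > 0$, the condition $$0<|h|<\delta \;\Rightarrow\; \left|\frac{f(x+h)-f(x)}{h}-L\right|<\epsilon$$ is literally the same condition as $0<|h|<\delta \Rightarrow |\sigma(h)-0|<\epsilon$, because the two absolute values contain the same quantity. So any $\delta$ that works for one statement works for the other.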
