Indeed both $\epsilon$ and $\delta$ are distances.
Maurice Fréchet generalized this idea further by defining metric spaces.
For example, on $\mathbb R$ the usual metric is the Euclidean metric, where the distance between $x, y \in \mathbb R$ is $|x-y|$.
So $|f(x) - L| < \epsilon$ means the distance between $f(x)$ and $L$ is less than $\epsilon$.
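As an illustrative sketch (the function and numbers here are made up, not from the answer), reading $|f(x) - L| < \epsilon$ as a statement about distance under the Euclidean metric looks like this:

```python
def dist(a, b):
    """The Euclidean metric on the real line: d(a, b) = |a - b|."""
    return abs(a - b)

def f(x):
    return 2 * x  # hypothetical example function

L, eps = 4.0, 0.1   # candidate limit of f at x -> 2, and a tolerance
x = 2.01
print(dist(f(x), L) < eps)  # True: the distance |4.02 - 4| = 0.02 < 0.1
```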
You are correct, by definition $\delta$ depends on $\epsilon$.
A better wording for your second quote in my opinion would be the following:
One can make the error arbitrarily small by choosing a small enough $\delta$.
Hope this helps.
To understand this, you need to think of the intuition behind the $\epsilon$-$\delta$ definition. We want $\lim_{x\to a}f(x)=L$ if we can make $f(x)$ as close to $L$ as we like by making $x$ sufficiently close to $a$. Worded differently, we might say that:
$\lim_{x\to a}f(x)=L$ if given any neighborhood $U$ of $L$, there is a neighborhood $V$ of $a$ such that elements of $V$ are mapped by $f$ to elements of $U$ (except possibly $a$ itself).
In this context, a "neighborhood" of a point $p$ should be understood to mean "points sufficiently close to $p$". Let's make that precise by defining what we mean by "close". For $\epsilon>0$ (assumed, but not required, to be very small) define
$$B(x,\epsilon):=\{y\,:\,|x-y|<\epsilon\},$$
the ball of radius $\epsilon$ about $x$. For our purposes, we say $U$ is a neighborhood of $x$ if $U=B(x,\epsilon)$ for some $\epsilon>0$. (The usual definition only requires that $U$ contains such a ball.) Assuming $\epsilon>0$ is very small, this agrees with our intuition of what closeness should mean. Now if we go back to our neighborhood "definition" of a limit, you should be able to think about it for a bit and convince yourself that it is equivalent to the usual definition.
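A small sketch may make the ball definition concrete (the function $f(x) = x^2$ and the specific numbers below are illustrative choices, not from the answer):

```python
def in_ball(y, x, eps):
    """Membership test for B(x, eps) = {y : |x - y| < eps}."""
    return abs(x - y) < eps

print(in_ball(2.05, 2.0, 0.1))  # True:  |2.0 - 2.05| = 0.05 < 0.1
print(in_ball(2.5,  2.0, 0.1))  # False: |2.0 - 2.5|  = 0.5 >= 0.1

# Neighborhood reading of a limit: for f(x) = x^2, a = 3, L = 9,
# given U = B(L, eps), the choice V = B(a, delta) with delta = 0.01
# sends sampled points of V (excluding a itself) into U.
def f(x):
    return x * x

a, L, eps, delta = 3.0, 9.0, 0.1, 0.01
xs = [a + delta * k / 100 for k in range(-99, 100) if k != 0]
print(all(in_ball(f(x), L, eps) for x in xs))  # True
```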
How does this relate to the problem with infinity? Given that infinity is not a real number (and things like distance from infinity do not make sense), we must revise what it means to be "close" to infinity. So for $M>0$ (assumed this time to be very large) define
$$B(+\infty,M):=\{y\,:\,y>M\},\quad B(-\infty,M):=\{y\,:\,y<-M\},$$
the neighborhoods of $\pm\infty$. Hopefully you can see why these make sense as definitions; a number should be close to infinity if it is very large (with the correct sign), so a neighborhood of infinity should contain all sufficiently large numbers.
Now we extend our neighborhood definition of limits to include the case where $a$ or $L$ can be $\pm\infty$. It is a similar exercise to the one before to verify that this extended definition agrees with the usual ones in every case; in this sense, the neighborhood formulation unifies the finite and infinite limit definitions.
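The unified neighborhood test can be sketched as one function (an illustrative model, not from the answer; the name `in_neighborhood` is made up):

```python
import math

def in_neighborhood(y, p, r):
    """True if y lies in the r-neighborhood of p, where p may be +/-inf.
    Finite p:   B(p, r)  = {y : |p - y| < r}
    p = +inf:   B(+inf, M) = {y : y > M}
    p = -inf:   B(-inf, M) = {y : y < -M}"""
    if p == math.inf:
        return y > r
    if p == -math.inf:
        return y < -r
    return abs(p - y) < r

print(in_neighborhood(1e9,  math.inf,  1e6))  # True:  1e9 > 1e6
print(in_neighborhood(100.0, math.inf, 1e6))  # False
print(in_neighborhood(-1e9, -math.inf, 1e6))  # True: -1e9 < -1e6
```

One function handles both cases, which mirrors how the neighborhood definition states all the limit variants at once.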
Best Answer
The definition I learned is
$\lim_{x \rightarrow y}{f(x)} = c$ if and only if for every $\epsilon > 0$, there is a $\delta > 0$ such that, for all $x$, $0 < |x - y| < \delta$ implies $|f(x) - c| < \epsilon$.
This is still a little technical, so let's see what it means. I think it's easiest to read $0 < |x - y| < \delta$ as "the distance between $x$ and $y$ is between $0$ and $\delta$".
If we use this interpretation, the definition becomes: $\lim_{x \rightarrow y}{f(x)} = c$ if and only if for every $\epsilon > 0$, there is a $\delta > 0$ such that if the distance between $x$ and $y$ is between $0$ and $\delta$, then the distance between $f(x)$ and $c$ is less than $\epsilon$.
This basically means that $f(x)$ gets arbitrarily close to $c$ (and I think this expression is still used sometimes as a more informal definition) without necessarily becoming $c$ (as James S. Cook pointed out). Suppose $f(x)$ does not get arbitrarily close to $c$, i.e., there is some constant $d$ such that $f(x)$ will stay away from $c$ with at least distance $d$. Then we can show the limit definition does not hold: take $\epsilon = d$. Now there doesn't exist a $\delta$ such that $0 < |x - y| < \delta \rightarrow |f(x) - c| < \epsilon $ (because we just assumed that $f(x)$ would never get closer to $c$ than distance $d$, and remember that this happening is equivalent to $|f(x) - c| < \epsilon = d$).
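The negation argument above can be played out numerically. As an illustrative sketch (this particular step function and the value $c = 0$ are made-up examples, not from the answer): the function stays at distance $d = 1$ from $c$, so $\epsilon = d$ defeats every candidate $\delta$.

```python
def f(x):
    # Hypothetical example: a step function that never gets within
    # distance d = 1 of the proposed limit c = 0 as x -> 0.
    return 1.0 if x >= 0 else -1.0

c, d = 0.0, 1.0
eps = d  # the single epsilon that defeats every delta
for delta in [1.0, 0.1, 0.001]:
    x = delta / 2                 # satisfies 0 < |x - 0| < delta
    print(abs(f(x) - c) < eps)    # False for every delta
```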
What may also be helpful is a (trivial) proof using the limit definition. Usually in these proofs, you take an $\epsilon$ and produce a $\delta$ for which you can prove that $0 < |x - y| < \delta \rightarrow |f(x) - c| < \epsilon$ (when dealing with continuous functions you usually just use $|x - y| < \delta$ in your proof). I will prove that $\lim_{x \rightarrow y} x = y$ (or, equivalently, take $f(x) = x$ and $c = y$ in the original formulation).
Take $\delta = \epsilon$. Then $|x - y| < \delta \rightarrow |f(x) - y| = |x - y| < \delta = \epsilon$. So the condition that there exists some $\delta$ for which $|x - y| < \delta \rightarrow |f(x) - c| < \epsilon$ holds, as I've just shown. More advanced proofs use similar logic, but finding the expression for $\delta$ and working out $|f(x) - c| < \epsilon$ can become quite hard. This is why other theorems are often used (for continuous functions, $\lim_{x \rightarrow y} f(x) = f(y)$, and compositions, products, and sums of continuous functions are again continuous, which can help you out very often).
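The choice $\delta = \epsilon$ in the proof above can be spot-checked numerically (the helper `delta_works` and the sample values are illustrative, not from the answer):

```python
def f(x):
    return x  # the identity function from the proof above

def delta_works(y, eps, delta, samples=1000):
    """Check |f(x) - y| < eps for sampled x with 0 < |x - y| < delta."""
    for k in range(1, samples + 1):
        x = y + delta * k / (samples + 1)  # 0 < x - y < delta
        if not abs(f(x) - y) < eps:
            return False
    return True

eps = 0.01
print(delta_works(y=3.0, eps=eps, delta=eps))  # True: delta = eps suffices
```

Sampling can only support the proof, not replace it; the algebraic step $|f(x) - y| = |x - y| < \delta = \epsilon$ is what covers all $x$ at once.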
Also, the definition requires that something must hold for every $\epsilon$. Sometimes, teachers explain this as a game: your opponent may choose $\epsilon > 0$ freely, and if you can give a procedure that produces a suitable $\delta$ (such that...) for any such $\epsilon$, then we can say $\lim_{x \rightarrow y} f(x) = c$. So, if you win this game by giving such a procedure, you basically have the recipe for a proof!
If you cannot win, it suffices to give a single $\epsilon$ such that $f(x)$ will never get closer to $c$ than distance $\epsilon$.
The Wikipedia page on the ($\epsilon$, $\delta$)-definition is pretty good. Also, the picture there may be helpful when you try to visualize $\delta$ and $\epsilon$ as distances.