For the purposes of limits, the precise dependence of $\delta$ on $\varepsilon$ is simply not important: the proofs do not require any knowledge of that relation. As noted, $\delta(\varepsilon)$ usually tends to $0$ as $\varepsilon$ tends to $0$, but this is not always the case. To make this precise, let $\delta(\varepsilon)$ be the largest $\delta$ that works for $\varepsilon$ in the definition of continuity of a function $f$ at $x$. Then $\delta(-)$ is a function whose domain is $(0,\infty)$ and whose values lie in $(0,\infty]$, and it is monotonically non-decreasing. If $f$ is a constant function, then $\delta(\varepsilon)=\infty$ for all $\varepsilon>0$, showing that indeed $\lim_{\varepsilon\to 0}\delta(\varepsilon)$ need not be $0$.
The definition of limit captures the following: $\lim_{x\to a}f(x)=L$ means that for any prescribed distance $\epsilon>0$, there exists some upper bound for distances $\delta$ such that if $x\ne a$ is within $\delta $ units from $a$, then $f(x)$ is guaranteed to be within $\epsilon$ units from the limit $L$.
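To make the definition concrete, here is a small Python sketch (not part of the original answer; the name `check_eps_delta` is mine). It samples points $x \ne a$ with $|x-a| < \delta$ and checks the implication; a passing check is only finite-sample evidence, not a proof.

```python
def check_eps_delta(f, a, L, eps, delta, n=10_000):
    """Sample points x != a with |x - a| < delta and check that
    |f(x) - L| < eps holds at every sample.  This inspects finitely
    many points, so it is evidence for a limit claim, not a proof."""
    for k in range(1, n + 1):
        h = delta * k / (n + 1)          # 0 < h < delta, strictly
        for x in (a - h, a + h):         # points on both sides of a
            if abs(f(x) - L) >= eps:
                return False
    return True

# lim_{x->2} x^2 = 4: for |x - 2| < delta <= 1 we have
# |x^2 - 4| = |x-2||x+2| < 5*delta, so delta = eps/5 works for eps <= 1.
print(check_eps_delta(lambda x: x * x, 2.0, 4.0, eps=0.1, delta=0.02))
```

A too-large $\delta$ (say $\delta = 1$ for $\varepsilon = 0.1$ in the example above) makes the check fail, which is exactly the point: the definition constrains how big $\delta$ may be, not how small.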
Remark: The function $\delta(-)$ above is known as a modulus of continuity for $f$. Functions whose moduli of continuity have certain properties (e.g., are concave) are of importance.
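One can also estimate the modulus $\delta(\varepsilon)$ numerically. Below is a rough grid-search sketch (my own construction, purely illustrative): it returns the largest tested $\delta$ for which the sampled points all stay within $\varepsilon$, and `float('inf')` when every tested $\delta$ works, as happens for a constant $f$.

```python
def delta_of_eps(f, x0, eps, delta_max=10.0, samples=20_000):
    """Grid-search estimate of the largest delta such that
    |x - x0| < delta implies |f(x) - f(x0)| < eps.  Returns
    float('inf') if every tested delta up to delta_max works,
    as for a constant f.  A numerical estimate, not the exact sup."""
    best = 0.0
    for k in range(1, samples + 1):
        h = delta_max * k / samples
        if abs(f(x0 + h) - f(x0)) >= eps or abs(f(x0 - h) - f(x0)) >= eps:
            return best
        best = h
    return float('inf')

print(delta_of_eps(lambda x: x, 0.0, eps=0.5))    # ~ 0.5 for the identity
print(delta_of_eps(lambda x: 3.0, 0.0, eps=0.5))  # inf for a constant
```

The identity function gives $\delta(\varepsilon) \approx \varepsilon$, while the constant function gives $\infty$, matching the two cases discussed above.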
When $\delta(\varepsilon)$ is written as you have above, it is merely a notational reminder that our choice of $\delta$ has to depend on the $\varepsilon$ we're given -- nothing more. In fact, $\delta$ also depends on $f$, $a$, and $L$. Writing $\delta(\varepsilon)$ does not mean that $\delta$ is a function to which we may plug in $\varepsilon$ to get our limit-satisfying $\delta$-value. In that vein, the also-common notation $\delta_\varepsilon$ could be argued to be better. However, we could create an actual function which acts in the spirit of the aforementioned $\delta(\varepsilon)$ and addresses your objection that we're "throwing out" other perfectly good values of $\delta$. This is most vivid if we restrict our attention to the following setup.
Let $A \subseteq \mathbb R$ be open and $f \colon A \longrightarrow \mathbb R$ have limit $L$ at $a$:
$$
\lim_{x \to a} f(x) = L.
$$
We may define
\begin{align}
\begin{split}
\delta_*(\varepsilon) &= \sup\{ \delta > 0 : a-\delta < x < a \implies |f(x) - L| < \varepsilon \}, \\
\delta^*(\varepsilon) &= \sup\{ \delta > 0 : a< x < a + \delta \implies |f(x) - L| < \varepsilon \},
\end{split}
\tag{1}
\end{align}
with the idea that $\delta_*(\varepsilon)$ tells you how far left of $a$ you can let $x$ go while keeping $|f(x) - L| < \varepsilon$, and $\delta^*(\varepsilon)$ tells you how far right of $a$ you can let $x$ go while keeping $|f(x) - L| < \varepsilon$. (We know that $\delta_*, \delta^* > 0$ because the sets on the RHS of $(1)$ are nonempty by the limit definition; the suprema may be $\infty$.) Hence the largest open $x$-interval on which $|f(x) - L| < \varepsilon$ is $$
X(\varepsilon) = \big( a - \delta_*(\varepsilon), a + \delta^*(\varepsilon) \big).
$$
An issue here is that $X(\varepsilon)$ is not (necessarily) symmetric about $a$, so it doesn't (necessarily) correspond to $|x - a| < \delta$ for any $\delta$. To remedy this, we define $\hat \delta (\varepsilon) = \min\{\delta_*(\varepsilon), \delta^*(\varepsilon)\}$; then any $x$ in the interval
$$
X'(\varepsilon) = \big( a - \hat \delta(\varepsilon), a + \hat \delta (\varepsilon) \big)
$$
will satisfy $|f(x) - L| < \varepsilon$. Note that $X'(\varepsilon) = \{ x : |x - a| < \hat \delta(\varepsilon)\}$, and hence any $\delta$ in the interval $I(\varepsilon) = \big( 0, \hat \delta(\varepsilon) \big]$ satisfies the $\varepsilon$-$\delta$ definition of our limit. Moreover, $I(\varepsilon)$ is the largest set of values of $\delta$ that will work for a given $\varepsilon$. In other words:
$\delta$ satisfies the $\varepsilon$-$\delta$ definition $\iff \delta \in I(\varepsilon)$.
This answers your Question 1. A proof of this follows @grand_chat's answer. Note that $I(\varepsilon)$ depends on $a$, $f$, and $L$ implicitly.
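Here is a numerical sketch of $(1)$ for $f(x) = x^2$, $a = 2$, $L = 4$ (my own illustration; `one_sided_delta` is a hypothetical helper). Since $f$ is monotone near $2$, it suffices to test the endpoint, and bisection recovers the one-sided suprema, which here have the closed forms $\delta^*(\varepsilon) = \sqrt{4+\varepsilon}-2$ and $\delta_*(\varepsilon) = 2-\sqrt{4-\varepsilon}$.

```python
import math

def one_sided_delta(f, a, L, eps, side, hi=10.0, iters=60):
    """Bisection estimate of sup{delta > 0 : every x strictly between
    a and a + side*delta satisfies |f(x) - L| < eps}, assuming f is
    monotone near a on that side (true for x^2 near 2)."""
    def ok(delta):
        # monotonicity lets us test only a point just inside the endpoint
        x = a + side * delta * (1 - 1e-12)
        return abs(f(x) - L) < eps
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if ok(mid):
            lo = mid
        else:
            hi = mid
    return lo

f = lambda x: x * x
eps = 1.0
d_right = one_sided_delta(f, 2.0, 4.0, eps, side=+1)   # delta^*(eps)
d_left  = one_sided_delta(f, 2.0, 4.0, eps, side=-1)   # delta_*(eps)
d_hat = min(d_left, d_right)
print(d_right, math.sqrt(4 + eps) - 2)   # both ~ sqrt(5) - 2
print(d_left,  2 - math.sqrt(4 - eps))   # both ~ 2 - sqrt(3)
```

Note that $\delta^*(\varepsilon) \ne \delta_*(\varepsilon)$ here, so $X(\varepsilon)$ is genuinely asymmetric about $a$, and $\hat\delta(\varepsilon)$ is the smaller of the two.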
One thing that may bother you is that $X(\varepsilon)$ may be much bigger than $X'(\varepsilon)$, so we're "throwing out perfectly good $x$'s". The $\varepsilon$-$\delta$ definition restricts $X(\varepsilon)$ to a symmetric interval ($X'(\varepsilon)$) about $a$. Does this help address your rigor question?
Of course, satisfying the definition of a limit only requires us to find one such $\delta$. The reason this is, as you describe, the method preferred by professors is the existence of complicated functions $f$ that make computing $I(\varepsilon)$ very difficult: it amounts to solving $f(x) = L \pm \varepsilon$ for $x$, i.e., inverting $f$. Since they need only a single point of $I(\varepsilon)$ rather than the whole interval, they opt for less work.
Your example of a "linear" function happens to be one in which the imprecise $\delta(\varepsilon)$ that people often write coincides with $\delta_*(\varepsilon) = \delta^*(\varepsilon) = \hat \delta (\varepsilon)$ in a quite canonical way, which may deceive people into believing that $\delta(\varepsilon)$ is somehow unique.
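The affine case can be spelled out directly (a sketch of my own, with the concrete values $m = 3$, $b = -1$, $a = 2$ chosen for illustration). For $f(x) = mx + b$ with $m \ne 0$ and $L = f(a)$, solving $|f(x) - L| < \varepsilon$ gives exactly $|x - a| < \varepsilon/|m|$, so both one-sided suprema equal $\varepsilon/|m|$ and $X(\varepsilon)$ is already symmetric about $a$: nothing is "thrown out".

```python
# For f(x) = m*x + b with m != 0 and L = f(a):
# |f(x) - L| = |m||x - a| < eps  <=>  |x - a| < eps/|m|,
# so delta_*(eps) = delta^*(eps) = hat-delta(eps) = eps/|m|.
m, b, a = 3.0, -1.0, 2.0
L = m * a + b
eps = 0.6
delta = eps / abs(m)            # = 0.2 here
# spot-check: at the endpoints of X(eps), |f(x) - L| reaches ~ eps
for x in (a - delta, a + delta):
    print(abs(m * x + b - L))   # ~ 0.6 at both endpoints
```

This is why, for linear examples, every textbook's $\delta(\varepsilon)$ looks the same: the endpoint of $I(\varepsilon)$ is forced, even though every smaller $\delta$ works too.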
Best Answer
This is a bit too long for a comment: Start by assuming you are given some $\varepsilon > 0$. Then, if you have found a suitable $\delta$, any $x$ satisfying $|x+1|<\delta$ would give $$|x^2-2x+3-6| = |(x-3)(x+1)| < \delta |x-3|. $$ But if $|x+1| < \delta$ then $|x-3|$ cannot be too big either: in fact you have $$|x-3| = |x+1 - 4| \le |x+1| + 4 < \delta + 4,$$ and taking the two inequalities together gives $$ |x^2-2x+3-6| < \delta(\delta +4).$$ Now you just have to find some $\delta$ so that the right-hand side is less than $\varepsilon$. You could try playing with some numbers here, but it is not difficult to see that if $\delta$ is less than both $\frac{1}{5} \varepsilon$ and $1$ (that is, less than their minimum), then $\delta(\delta+4) < 5\delta < \varepsilon$, which is what you need.
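As a sanity check of that choice, here is a short numerical sketch (mine, not part of the answer): it takes $\delta = \min(\varepsilon/5, 1)$, samples points with $|x+1| < \delta$, and records the worst deviation $|f(x)-6|$, which should stay below $\varepsilon$ every time.

```python
# Numerical spot-check of delta = min(eps/5, 1) for
# lim_{x -> -1} (x^2 - 2x + 3) = 6.  A finite sample, not a proof.
f = lambda x: x * x - 2 * x + 3

def worst_error(eps, n=10_000):
    delta = min(eps / 5, 1.0)
    # sample x = -1 + s*h with 0 < h < delta on both sides of -1
    return max(abs(f(-1 + s * delta * k / (n + 1)) - 6)
               for k in range(1, n + 1) for s in (-1, 1))

for eps in (2.0, 0.5, 0.01):
    print(worst_error(eps) < eps)   # True each time
```

The bound $\delta(\delta+4) < \varepsilon$ is deliberately loose (it uses $|x-3| < \delta + 4 \le 5$), which is why the observed worst error sits comfortably below $\varepsilon$ rather than hugging it.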