I'm having trouble understanding the proof of the arithmetic of limits involving infinity.
(I'm quoting from my textbook.)
Let $f,g$ be functions, and assume that:
$$\lim_{x \to x_0}f(x)=L \mbox{ (finite)}$$
$$ \lim_{x \to x_0} g(x)=\infty $$
Prove that :
$$\lim_{x \to x_0}(f+g)(x)=\infty$$
$f,g$ are defined on a punctured neighborhood $N^*_\delta(x_0)$ of $x_0$.
We need to show that for every $M>0$ there exists $\delta_*>0$ such that every $x$ satisfying $0<|x-x_0|<\delta_*$ satisfies $(f+g)(x)>M$.
There is $M>0$ large enough that $M>L-1$.
$$\lim_{x \to x_0}g(x)=\infty$$
Therefore there exists $\delta_1$ with $0<\delta_1<\delta$ such that every $x$ satisfying $0<|x-x_0|<\delta_1$ satisfies $g(x)>M-L+1$.
Similarly, $$\lim_{x \to x_0}f(x)=L$$
Therefore there exists $\delta_2$ with $0<\delta_2<\delta$ such that every $x$ satisfying $0<|x-x_0|<\delta_2$ satisfies $|f(x)-L|<1$, so $f(x)>L-1$.
We choose $\delta_*=\min\{\delta_1,\delta_2\}$, so every $x$ satisfying $0<|x-x_0|<\delta_*$ satisfies:
$(f+g)(x) = f(x)+g(x) > L-1+M-L+1=M$.
I don't understand why there is $M>0$ such that $M>L-1$.
I also don't follow the step "$f(x)+g(x)>L−1+M−L+1=M$".
Best Answer
The statement to prove is: if $\lim\limits_{x\to x_0}f(x)=L$ (finite) and $\lim\limits_{x\to x_0}g(x)=+\infty$, then $\lim\limits_{x\to x_0}(f+g)(x)=+\infty$.
To prove this recall that $$\lim \limits_{x\to x_0}\left(f(x)\right)=L\iff \forall \varepsilon >0\,\exists \delta _2>0\,\forall x\in I\left(0<|x-x_0|<\delta _2\implies |f(x)-L|<\varepsilon\right)$$ and $$\lim \limits_{x\to x_0}(g(x))=+\infty\iff \forall \color{purple}M>0\,\exists \delta _1>0\,\forall x\in I(0<|x-x_0|<\delta _1\implies g(x)>\color{purple}M).$$
Remember the goal is to prove that $\lim \limits_{x\to x_0}(f+g)(x)=+\infty$, or equivalently $$\forall \color{blue}M>0\,\exists \delta_*>0\,\forall x\in I(0<|x-x_0|<\delta _*\implies (f+g)(x)>\color{blue}M).$$
Proof: Begin by taking an arbitrary (blue) $\color{blue}M>0$.
Either $\color{blue}M\leq L-1$ holds or $\color{blue}M>L-1$ does.
$\bbox[5px,border:2px solid #000000]{\text{Case: }\color{blue}M> L-1}$
The goal is to find $\delta _*>0$ such that $\forall x\in I(0<|x-x_0|<\delta _*\implies (f+g)(x)>\color{blue}M)$.
With $\varepsilon=1$ one gets the existence of $\delta _2>0$ with the property that $\forall x\in I(0<|x-x_0|<\delta _2\implies |f(x)-L|<1)$.
With $\color{purple}M=\color{blue}M-(L-1)\color{grey}{>0}$, one gets the existence of $\delta _1>0$ such that $\forall x\in I(0<|x-x_0|<\delta _1\implies g(x)>\color{blue}M-(L-1))$.
Now define $\delta _*:=\min\left(\{\delta _1, \delta _2\}\right)$. The goal is now to prove that $\forall x\in I(0<|x-x_0|<\delta _*\implies (f+g)(x)>\color{blue}M)$.
Take $x\in I$ and assume that $0<|x-x_0|<\delta _*$.
Since $\delta _*\leq \delta _2$ one gets $|f(x)-L|<1$, i.e., $-1<f(x)-L<1$ which implies $L-1<f(x)$.
Since $\delta _*\leq \delta _1$ one gets $\color{blue}M-(L-1)<g(x)$.
Therefore $L-1+\color{blue}M-(L-1)<f(x)+g(x)$, that is, $(f+g)(x)>\color{blue}M$.
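To make the construction concrete, here is a quick numerical sanity check with made-up functions of my own choosing (not from the question): $f(x)=2+x$, so $L=2$, and $g(x)=1/x^2$ at $x_0=0$. For these, $\delta_2=1$ works for $\varepsilon=1$, and $\delta_1=1/\sqrt{\color{blue}M-(L-1)}$ achieves the purple bound.

```python
import math

# Hypothetical example (not from the question): f(x) = 2 + x, L = 2,
# g(x) = 1/x**2, x0 = 0.
x0, L = 0.0, 2.0
f = lambda x: L + (x - x0)          # |f(x) - L| = |x - x0| < 1 once delta2 = 1
g = lambda x: 1.0 / (x - x0) ** 2   # g(x) > K whenever 0 < |x - x0| < 1/sqrt(K)

for M in (5.0, 100.0, 1e6):         # arbitrary blue M, all with M > L - 1
    K = M - (L - 1)                 # the purple bound demanded of g
    delta1 = 1.0 / math.sqrt(K)     # g(x) > K on 0 < |x - x0| < delta1
    delta2 = 1.0                    # |f(x) - L| < 1 on 0 < |x - x0| < delta2
    delta_star = min(delta1, delta2)
    # sample points in the punctured ball 0 < |x - x0| < delta_star
    for t in (0.1, 0.5, 0.9):
        x = x0 + t * delta_star
        assert f(x) + g(x) > M      # the inequality the proof establishes
print("(f+g)(x) > M at every sampled point")
```

Of course this samples only finitely many points; the proof above is what guarantees the inequality on the whole punctured ball.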
$\bbox[5px,border:2px solid #000000]{\text{Case: }\color{blue}M\leq L-1}$
Hopefully, after reading the case above, you can see how to handle this case.
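In case you want to check your idea afterwards, here is one possible way to dispose of this case (a sketch of mine, not necessarily the intended argument): reduce it to the case already proved.

```latex
% If M <= L-1, borrow the delta_* produced by the first case for the
% larger target M' := L (which does satisfy M' > L-1). Then for all
% x in I with 0 < |x - x_0| < delta_*:
\[
  (f+g)(x) > M' = L > L-1 \ge \color{blue}{M},
\]
% so the same delta_* works for the original M as well.
```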