Finding a Limit vs. Proving a Limit of *something*.

definition, limits, real-analysis, terminology

I get confused every time between finding a limit and proving that something is a limit. If I calculate the limit of a function $f(x)$ at $a$ and get, say, $\lim\limits_{x \to a} f(x) = L$ where $a \in [-\infty, \infty]$, then is that not the same as showing that $L$ is the limit of $f$ at $a$?

However, whenever I read answers here on this site or on other math sites, I notice that when people are about to calculate the limit of a function or of some strange expression, they start off by saying "assuming the limit exists" or "assuming the expression converges" and then proceed to calculate. What is that about? Why do you have to assume that it exists? If you do find a limit, then surely the thing has a limit; otherwise, what does it even mean to find a limit?

My Questions:

  1. Can someone give a clear definition of what it means to find or compute a limit versus what it means to prove one, and maybe explain the difference through an example?
  2. What does the phrase "assuming the expression converges" mean? (Note: not just for a function, but for any expression in some variables; what does it mean for it to converge, in simple English?)
  3. What is the logic behind this? Why do we do this? What logical errors would we encounter otherwise?

Edit: I'm updating my question and including an example of what I'm trying to ask.
Say $f(n)= \dfrac{3n^2+1}{2n^2-89}$; then I can compute the limit:
$$\lim\limits_{n \to \infty} f(n) = \lim\limits_{n \to \infty} \dfrac{3n^2+1}{2n^2 -89} = \lim\limits_{n\to \infty} \dfrac{3 + \frac{1}{n^2}}{2- \frac{89}{n^2}} = \dfrac{\lim\limits_{n \to \infty} \left(3 + \frac{1}{n^2}\right)}{\lim\limits_{n \to \infty}\left(2- \frac{89}{n^2}\right)}= \dfrac{3}{2}$$

Here, I can justify going from one step to the next. For instance, I can go from step 2 to step 3 because $n \neq 0$ as $n \to \infty$; I could make this more rigorous, but I won't for brevity (see the quotient rule spelled out below). Now, why do I have to prove that $3/2$ really is the limit of $f(n)$ as $n \to \infty$?
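To spell out the justification I have in mind for the key step: going from step 3 to step 4 uses the standard quotient rule for limits,

$$\lim_{n \to \infty} \dfrac{a_n}{b_n} = \dfrac{\lim\limits_{n \to \infty} a_n}{\lim\limits_{n \to \infty} b_n}, \quad \text{provided both limits exist and } \lim_{n \to \infty} b_n \neq 0,$$

where here $a_n = 3 + \frac{1}{n^2}$ and $b_n = 2 - \frac{89}{n^2}$, and the denominator's limit is $2 \neq 0$.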


In the example below in the comments, Mr. Dave gives $x_n = 2+4+8+ \ldots + 2^n$ and claims (without actually computing it) that $L$ is the limit. Since he has not computed $L$ and found it to be a real number, he cannot manipulate it like a real number either, so the manipulation that follows in his example doesn't make any sense. Whereas I found my limit (and it is a real number) with proper justification and reasoning in going from one step to the next. Why do I still need to prove that $3/2$ is the limit, or that $f(n)$ converges? What is the logic/reasoning behind it?
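(For readers who can't see the comments: the manipulation was, as I understand it, something like the classic fallacy of assuming $\lim\limits_{n \to \infty} x_n = L$ for some real number $L$ and then computing

$$2L = 4 + 8 + 16 + \cdots = L - 2 \implies L = -2,$$

which is absurd, since every $x_n$ is positive.)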

Best Answer

When your method for calculating a limit is valid, i.e. each step is justified as a consequence of the definition of a limit, then finding the limit obviously also proves that the limit exists. For example, the definition of the limit of a real sequence implies that whenever $\lim x_n = x$ and $\lim y_n = y$ for sequences $(x_n),(y_n)\in \mathbb{R}^\mathbb{N}$ and reals $x,y\in\mathbb{R}$, then $\lim (x_n + y_n) = x + y$. The converse, of course, is not true.
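A sketch of how that sum rule falls out of the definition (the usual $\varepsilon/2$ argument): given $\varepsilon > 0$, pick $N_1$ and $N_2$ such that $n \geq N_1$ implies $|x_n - x| < \varepsilon/2$ and $n \geq N_2$ implies $|y_n - y| < \varepsilon/2$; then for $n \geq \max(N_1, N_2)$,

$$|(x_n + y_n) - (x + y)| \leq |x_n - x| + |y_n - y| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.$$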

A typical example of limits that we can often prove to exist without explicitly computing their values are the limits of real series, i.e. limits of the form $\lim_{n\to\infty} \sum_{k=0}^n c_k$ with $c_k \in\mathbb{R}$. In this case, saying the limit exists amounts to saying the series converges, i.e. $\sum_{k=0}^\infty c_k = x$ for some $x\in\mathbb{R}$, but calculating $x$ is usually harder than proving (or disproving) this convergence.
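A concrete instance: $\sum_{k=1}^\infty \frac{1}{k^2}$ is easy to prove convergent by comparison with a telescoping series,

$$\sum_{k=1}^{n} \frac{1}{k^2} \leq 1 + \sum_{k=2}^{n} \frac{1}{k(k-1)} = 2 - \frac{1}{n} < 2,$$

so the partial sums are increasing and bounded, hence convergent; but actually computing the value, $\frac{\pi^2}{6}$, is the famous Basel problem and requires considerably more work.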

When you compute the limit of a real function by the "typical" means, you are, perhaps without realizing it, using a set of rules derived from the definition of a limit, together with various theorems. For example, you might say $\lim_{t\to\infty} \frac{1}{t} = 0$ because $\frac{1}{t}$ is decreasing and bounded below, so by the monotone convergence theorem its limit must equal its infimum, namely $0$.
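Spelled out directly from the definition rather than via monotone convergence: given $\varepsilon > 0$, every $t > \frac{1}{\varepsilon}$ satisfies

$$\left|\frac{1}{t} - 0\right| = \frac{1}{t} < \varepsilon,$$

which is exactly what $\lim_{t \to \infty} \frac{1}{t} = 0$ asserts.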

The definition of convergence in plain English can be found in any dictionary, which, as far as I know, this forum is not. To build an intuition for the meaning of mathematical convergence, you should probably start by considering more thoroughly the definition of the limit of a sequence, or of a function on a metric space. In plain words, the limit of a real function $f$ at a point $x_0 \in \mathbb{R}$, when it exists, is the value $L\in\mathbb{R}$ such that, the closer $x$ gets to $x_0$, the closer $f(x)$ gets to $L$.
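Formally, for a real function that plain-English description is the usual $\varepsilon$-$\delta$ definition:

$$\lim_{x \to x_0} f(x) = L \iff \forall \varepsilon > 0 \;\, \exists \delta > 0 : \; 0 < |x - x_0| < \delta \implies |f(x) - L| < \varepsilon.$$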

You might wonder, "Why do you specify *when the limit exists*?" Well, you cannot speak of the limit $L\in\mathbb{R}$ of $f$ at $x_0$ if, as a matter of fact, there is no $L\in\mathbb{R}$ that $f(x)$ approaches as $x$ approaches $x_0$. For example, we cannot properly speak of the limit of $\frac{1}{t-1}$ at $1$, because there is no real number that $\frac{1}{t-1}$ approaches as $t$ approaches $1$. We conventionally write $\lim_{t\to 1^+}\frac{1}{t-1} = \infty$ not to say that $\infty$ is a real number to which $\frac{1}{t-1}$ converges, but to say precisely the opposite: that $\frac{1}{t-1}$ diverges as $t$ approaches $1$ and that, concretely, $\frac{1}{t-1}$ becomes arbitrarily large as $t$ approaches $1$ from the right.
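Concretely, the two one-sided limits already rule out any real limit:

$$\lim_{t \to 1^+} \frac{1}{t-1} = +\infty, \qquad \lim_{t \to 1^-} \frac{1}{t-1} = -\infty,$$

so $\frac{1}{t-1}$ is unbounded near $1$ and cannot approach any single real number $L$.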
