Visual explanation and intuition behind "asymptotic equivalence" and "order of growth" (Big-Theta and tilde notations)

Tags: asymptotics, intuition, limits

I'm trying to grasp the definitions and meaning of asymptotic equivalence and order of growth, but it's already taking too much time. Let's start with the tilde notation. From Wikipedia:

$$
f(x) \sim g(x) \quad (\text{as } x \to \infty)
$$

if and only if:

$$
\lim_{x\to\infty} \frac{f(x)}{g(x)} = 1
$$

So, if we take the following functions as an example:

$$ \bbox[yellow]
{
f(x) = x^2 + x \cdot \log_2 x
\qquad (1)
}
$$

and
$$ \bbox[yellow]
{
g(x) = x^2
\qquad (2)
}
$$

and plot them for x=0..10:

[Plot of $f(x)$ and $g(x)$ over $0 \le x \le 10$]

they don't seem identical here. But then, if we plot them for x=0..10000:

[Plot of $f(x)$ and $g(x)$ over $0 \le x \le 10000$]

they become almost identical. Clearly, it works as $x$ approaches infinity. But why?
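As a quick numeric sanity check (a minimal Python sketch; the function names are mine, not from any library), we can watch the ratio $\frac{f(x)}{g(x)}$ converge without plotting:

```python
import math

def f(x):
    """f(x) = x^2 + x * log2(x), the function from (1)."""
    return x**2 + x * math.log2(x)

def g(x):
    """g(x) = x^2, the function from (2)."""
    return x**2

# The ratio f(x)/g(x) simplifies to 1 + log2(x)/x, which tends to 1.
for x in [10, 100, 10_000, 1_000_000]:
    print(f"x = {x:>9,}: f(x)/g(x) = {f(x) / g(x):.6f}")
```

The ratio is roughly 1.33 at $x=10$ but about 1.00002 at $x=10^6$: the $x \log_2 x$ term never disappears, it just becomes negligible relative to $x^2$.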

In contrast, if we use $g(x) = x^3$ instead, the limit equals 0, which means that this pair of functions isn't asymptotically equivalent.
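Indeed, a one-line computation confirms it:

$$
\lim_{x\to\infty} \frac{x^2 + x \log_2 x}{x^3}
= \lim_{x\to\infty} \left( \frac{1}{x} + \frac{\log_2 x}{x^2} \right)
= 0
$$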

But how does it work? Why are functions asymptotically equivalent if and only if
$$
\lim_{x\to\infty} \frac{f(x)}{g(x)} = 1?
$$
Why can we trust this definition? Is there a visual explanation of it?

Best Answer

If you want to talk about two functions' behavior as $x \to \infty$, there are several ways to compare them.

One might be to talk about the absolute error between them $|f(x)-g(x)|$ and ask if it tends to zero. Graphically this is easy to visualize in the kind of plots you are making: the vertical gap between the graphs of $f$ and $g$ shrinks to zero as $x \to \infty$.

This is a very strong notion of "$f$ and $g$ behave the same as $x \to \infty$," since it excludes examples such as $f(x)=x+1$ and $g(x)=x$.

There may be situations where you don't care about the absolute error but are more concerned with the relative error, i.e. whether the ratio $\frac{f(x)}{g(x)}$ tends to $1$. This is a more relaxed notion of "$f$ and $g$ behave the same as $x \to \infty$." Note that it holds for the above example $f(x)=x+1$ and $g(x)=x$.

The intuition is that if $f$ and $g$ both get large as $x \to \infty$, we are more and more willing to tolerate larger absolute errors between them, as long as these absolute errors are small relative to the value of the functions. Explicitly, note that $|\frac{f(x)}{g(x)}-1| = \frac{|f(x)-g(x)|}{|g(x)|}$. It is fine for the absolute error $|f(x)-g(x)|$ not to shrink to zero, as long as it is small relative to $|g(x)|$. Note also that we have the implication $|f(x)-g(x)| \to 0 \implies |\frac{f(x)}{g(x)} - 1| \to 0$ (valid here since $|g(x)| \to \infty$).
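A minimal sketch (Python; illustrative, not part of the original answer) makes the contrast between the two error notions tangible for $f(x)=x+1$ and $g(x)=x$:

```python
def f(x):
    return x + 1  # f(x) = x + 1

def g(x):
    return x      # g(x) = x

for x in [10, 1_000, 100_000]:
    abs_err = abs(f(x) - g(x))      # always exactly 1: never shrinks
    rel_err = abs(f(x) / g(x) - 1)  # equals 1/x: shrinks to 0
    print(f"x = {x:>7,}: |f - g| = {abs_err}, |f/g - 1| = {rel_err:.6f}")
```

The absolute error is stuck at 1, yet $f \sim g$ because the relative error $1/x$ vanishes.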

Unfortunately, I don't think it is easy to read this off from plots of $f$ and $g$. For instance, my example of $f(x)=x+1$ and $g(x)=x$ would be two parallel lines, which you might say become "almost identical" if you zoom out far enough; conversely, you can come up with examples where a plot (which necessarily can't take $x \to \infty$) seems to show that $f$ and $g$ are "nearly identical" even though they don't actually satisfy $\frac{f}{g} \to 1$.

As a practical example of why relative error can be useful, suppose $f$ and $g$ are the number of minutes two algorithms need to process $x$ inputs. If $f(x)=x+1$ and $g(x)=x$, both algorithms basically require one minute per input, but the first needs an extra minute, maybe for set-up time. The extra minute the first algorithm requires is not really a big deal when $x$ is large: for example, if there are a million inputs, we are not really concerned about the difference between $10^6$ minutes and $10^6 + 1$ minutes.

Big-O notation is an even further relaxation of this notion.
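To see the gap between the notions, a small example of my own: $f(x) = 2x$ is $O(x)$, and even $\Theta(x)$, but $2x \not\sim x$, since

$$
\lim_{x\to\infty} \frac{2x}{x} = 2 \neq 1
$$

Asymptotic equivalence pins down the leading constant, while Big-O and Big-Theta only pin down the order of growth.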