What does the growth of a function actually represent?

asymptotics · computer science · discrete mathematics

I am going through the book Discrete Mathematics by Kenneth Rosen. In Chapter 3, there is a topic "The Growth of Functions", followed by big-O notation. I am really unable to understand what the growth of a function actually represents. Kindly share some insights with examples; that would be of great help. Thank you so much for the help.

PS: I am new to this topic.

Best Answer

Not sure if you have a specific question, but it seems you understand big O notation and want some input on its significance. Broadly speaking, I would say it gives us a way to understand the behavior of something complicated or less familiar in terms of something simpler and more familiar, particularly as an input size grows.

For instance, consider the function

$$f(n)=9\log n +2\sin n +3(\log n)^2-n^2+n^3.$$

While this function is elementary, its expression is a bit messy. Moreover, perhaps we are not interested in its precise expression. Perhaps we simply want to know its behavior for large $n$. Intuitively, we know the $n^3$ term dominates the expression. Big O notation formalizes this intuition:

$$f(n)=O(n^3)\text{ as }n\uparrow \infty $$

so we can say $f$ behaves "like $n^3$" (is of order $n^3$) for large $n$. This is a simpler and more palatable characterization of $f$ than its full functional form.
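
To see this concretely from the definition (Rosen phrases big O in terms of witnesses $C$ and $k$), note that for $n\ge 1$ we have $\log n\le n$ and $|\sin n|\le 1$, so

$$|f(n)|\le 9n+2+3n^2+n^2+n^3\le 9n^3+2n^3+3n^3+n^3+n^3=16n^3,$$

and the witnesses $C=16$, $k=1$ show $f(n)=O(n^3)$. (These constants are just one convenient choice, not the best possible.) In fact $f(n)/n^3\to 1$, so $n^3$ is not merely an upper bound but the true order of growth.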


The above example was a warm-up where the function $f$ had a closed form, but there are several applications where characterizing the growth of a function is particularly useful precisely because we don't have a closed-form expression for it. I will briefly describe a couple of applications that come to mind:

i. Number-theoretic results can be usefully stated using big O notation. For instance, there is no known efficiently computable formula for the exact sequence of prime gaps (the differences between consecutive prime numbers). Still, we can state important results about the long-run size of these gaps. In particular, the best known unconditional bound on the size of the gaps is

$$p_{n+1}-p_n=O(p_n^{0.525}),$$ where $p_n$ is the $n$th prime number.

It turns out that the Riemann hypothesis implies slightly smaller gaps: $$p_{n+1}-p_n=O(\sqrt {p_n} \log p_n).$$ However, Cramér's conjecture, which is based on a probabilistic model of the prime numbers, asserts that the gaps are even smaller:

$$p_{n+1}-p_n=O( (\log p_n)^2).$$
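
If you want to see what these bounds mean numerically, here is a small Python sketch (my own illustration, not part of any cited result): it sieves the primes below $10^6$ and compares each gap with $(\log p_n)^2$, the growth rate appearing in Cramér's conjecture. The cutoff $10^6$ and the restriction to $p_n>1000$ are arbitrary choices, made only to keep the run fast and to avoid the noisy small primes.

```python
import math

def primes_below(limit):
    """Return all primes < limit via the sieve of Eratosthenes."""
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i::i] = [False] * len(range(i * i, limit, i))
    return [p for p, flag in enumerate(is_prime) if flag]

primes = primes_below(1_000_000)

# Compare each gap p_{n+1} - p_n with (log p_n)^2 for 1000 < p_n < 10^6.
# The largest ratio in this range stays well below 1 -- numerical evidence
# (not a proof!) consistent with Cramer's conjectured order of growth.
ratio, p, gap = max(
    ((q - p) / math.log(p) ** 2, p, q - p)
    for p, q in zip(primes, primes[1:])
    if p > 1000
)
print(f"largest gap/(log p)^2 for 1000 < p < 10^6: {ratio:.2f} (gap {gap} after {p})")
```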

ii. Another application is in algorithms. An algorithm is a finite sequence of instructions, but its runtime may not have a clean functional form. Characterizing the runtime of an algorithm using big O notation thus gives a useful snapshot metric of its performance, and its quantitative nature allows one to compare the performance of different algorithms (i.e. it measures their efficiency).

For instance, consider solving the travelling salesman problem (TSP), which asks for the shortest route that visits every city on a map exactly once and returns to the origin, given a list of $n$ cities and the distances between them. A natural question to ask about an algorithm that solves it is: how many elementary operations does it take to solve the TSP, particularly when the number of cities $n$ is large?

A brute-force algorithm that checks all permutations of the cities gives an exact solution and takes factorial time, i.e. it is $O(n!).$ This means the number of elementary operations needed to solve the problem by brute force grows superexponentially as you add more cities. However, the Held–Karp algorithm, an approach based on dynamic programming, also gives an exact solution but has runtime $O(n^2 2^n),$ which is much better than factorial time (it takes only exponential time). We can make such quantitative comparisons between algorithms by studying the growth of their runtimes with the help of big O notation.
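
To make the comparison concrete, here is a small Python sketch of both approaches (the function names and the sample distance matrix are my own, purely for illustration). The brute force tries all $(n-1)!$ orderings; Held–Karp fills a table indexed by (subset of cities, last city), and the three nested loops over subsets, endpoints and predecessors are exactly where the $O(n^2 2^n)$ operation count comes from.

```python
from itertools import combinations, permutations

def tsp_brute_force(dist):
    """Exact TSP by trying all (n-1)! orderings of cities 1..n-1 -- O(n!) time."""
    n = len(dist)
    return min(
        sum(dist[a][b] for a, b in zip((0,) + order, order + (0,)))
        for order in permutations(range(1, n))
    )

def held_karp(dist):
    """Exact TSP via dynamic programming -- O(n^2 * 2^n) time, O(n * 2^n) memory."""
    n = len(dist)
    # dp[(S, j)] = length of the cheapest path that starts at city 0,
    # visits every city in the set S exactly once, and ends at j (j in S).
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                # Reach j by extending the best path over S \ {j} ending at some k.
                dp[(S, j)] = min(
                    dp[(S - {j}, k)] + dist[k][j] for k in subset if k != j
                )
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# A tiny 4-city example; both methods agree on the optimal tour length (21 here).
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(tsp_brute_force(dist), held_karp(dist))
```

The growth rates tell you in advance which of these will scale: at $n=20$, for example, $n!\approx 2.4\times 10^{18}$ while $n^2 2^n\approx 4\times 10^{8}$, so the brute force is hopeless long before Held–Karp is.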

Wikipedia has a nice table of time complexities with some example algorithms that you may find of interest. Time complexity is certainly a fascinating topic for further reading if you are interested in algorithms.