Discrete Mathematics – Understanding Definition of Big-O Notation

asymptotics, discrete mathematics

In a textbook, I came across a definition of big-O notation; it goes as follows:

We say that $f(x)$ is $O(g(x))$ if there are constants $C$ and $k$ such that $$|f(x)| \le C|g(x)|$$ whenever $x \gt k$.

If I'm not mistaken, this basically translates to: $$f(x) = O(g(x)) \iff \exists C, k \in \Bbb R \; \forall x \, (x \gt k \rightarrow |f(x)| \le C|g(x)|)$$
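For concreteness, here is how I would apply the definition to a small example of my own (not from the textbook): take $f(x) = 3x^2 + 5x + 2$ and $g(x) = x^2$. For $x \gt 1$ we have $x \le x^2$ and $1 \le x^2$, so $$|3x^2 + 5x + 2| \le 3x^2 + 5x^2 + 2x^2 = 10|x^2|,$$ and the witnesses $C = 10$, $k = 1$ show that $f(x)$ is $O(x^2)$.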
Now, I have two questions regarding this statement:

  1. Is my verbose translation correct?

  2. What exactly does this definition imply about the usage of big-O notation? From what I understand, in computer science big-O is used to represent the computational complexity of an algorithm. How does this definition relate to the complexity of an algorithm (if at all)?

Best Answer

Your translation is correct. The intuition behind big-O notation is that $f$ is $O(g)$ if $g(x)$ grows at least as fast as $f(x)$ (up to a constant factor) as $x \rightarrow \infty$. This is used in computer science when studying the time complexity of an algorithm. Specifically, if we let $f(n)$ be the run-time (number of steps) that an algorithm takes on an $n$-bit input to produce an output, then it may be useful to say that $f$ is $O(n^2)$, since then we know the algorithm is relatively fast even for large inputs $n$. On the other hand, if all we knew was that $f$ is $O(2^n)$, then the algorithm might run too slowly for large inputs.
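To tie this back to the definition, here is a small illustration of my own (a hypothetical algorithm, not one from the question): suppose an algorithm compares every pair of entries in a list of $n$ items, so its step count is $f(n) = \binom{n}{2} = \frac{n(n-1)}{2}$. Since $$\frac{n(n-1)}{2} \le \frac{1}{2}\,n^2 \quad \text{for all } n \gt 0,$$ the constants $C = \frac{1}{2}$ and $k = 0$ witness that $f(n)$ is $O(n^2)$.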

Note that I say "might" here, because big-O only gives you an upper bound: for example, $n^2$ is $O(2^n)$ but $2^n$ is not $O(n^2)$.
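To check those two claims against the definition (a sketch with my own choice of constants): $n^2 \le 2^n$ holds for all $n \ge 4$ (a quick induction), so $C = 1$ and $k = 4$ witness that $n^2$ is $O(2^n)$. In the other direction, $$\frac{2^n}{n^2} \to \infty \quad \text{as } n \to \infty,$$ so for any fixed $C$ and $k$ there is some $n \gt k$ with $2^n \gt C n^2$, and no witnesses exist.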
