Confusion about the meaning of the variables in Big-O notation

asymptotics, computer science

Trying to learn asymptotic notation, I read that $O(g(n)) = \{\, f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \leq f(n) \leq cg(n) \text{ for all } n \geq n_0 \,\}$.
However, I'm not sure what these symbols mean. I'd assume $f(n)$ is the algorithm? But what are the other variables for?
Sorry if I'm not even right about $f(n)$ being the algorithm…

Best Answer

$f(n)$ and $g(n)$ are any positive functions. For instance $f(n)$ may describe the running time of a certain algorithm for an input of size $n$, or it may describe the amount of memory a certain algorithm uses for an input of size $n$, but it may also be entirely unrelated to algorithms -- this asymptotic notation is just a mathematical notation which is used in analysis.

To say that $f(n) \in O(g(n))$ means intuitively "$f(n)$ does not grow faster than $g(n)$ times a constant". For instance, $n^2 \in O(n^3)$ because we can take $n_0=1$ and $c=1$ because $0\leq n^2\leq n^3$ for all $n\in\mathbb{N}$. We also have $5n^2 \in O(n^2)$ with $n_0=1$ and $c=5$. But on the other hand we can't say that $n^3\in O(n^2)$ because for every constant $c$ there is some $m\in\mathbb{N}$ such that for every $n>m$ we have $n^3 > cn^2$.
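The witnesses $c$ and $n_0$ above can be spot-checked numerically. A minimal Python sketch (`is_bounded` is an illustrative helper, not a library function; a finite check is evidence, not a proof, since Big-O quantifies over all $n \geq n_0$):

```python
def is_bounded(f, g, c, n0, n_max=1000):
    """Check 0 <= f(n) <= c*g(n) for every n in [n0, n_max].

    This is a finite spot-check of the Big-O condition, not a proof:
    the real definition requires the inequality for ALL n >= n0.
    """
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

# n^2 in O(n^3): witnesses c=1, n0=1 work.
print(is_bounded(lambda n: n**2, lambda n: n**3, c=1, n0=1))    # True

# 5n^2 in O(n^2): witnesses c=5, n0=1 work.
print(is_bounded(lambda n: 5 * n**2, lambda n: n**2, c=5, n0=1))  # True

# n^3 vs c*n^2: any fixed c eventually fails (c=100 fails once n > 100).
print(is_bounded(lambda n: n**3, lambda n: n**2, c=100, n0=1))  # False
```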

The purpose of the constant $c$ is to let us say that functions which agree up to multiplication by a constant "grow at the same asymptotic rate", in the sense that for any $f(n)$ we have $f(n)\in O(Cf(n))$ and $Cf(n)\in O(f(n))$ for any $C>0$. The purpose of the constant $n_0$ is to allow the functions to behave differently for the first few terms. What matters is what happens asymptotically, that is, for large $n$; if the inequality has finitely many exceptions, $n_0$ is there to allow them.
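The role of $n_0$ can be illustrated concretely: for $f(n)=n^2+100$ and $g(n)=n^2$ with $c=2$, the inequality $n^2+100\leq 2n^2$ fails for small $n$ but holds once $n^2\geq 100$, i.e. from $n_0=10$ on. A small Python sketch (`holds_from` is an illustrative helper; the check over a finite range is only evidence, not a proof):

```python
def holds_from(f, g, c, n0, n_max=1000):
    """True if f(n) <= c*g(n) for every n in [n0, n_max] (finite spot-check)."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: n**2 + 100   # "misbehaves" for small n
g = lambda n: n**2

print(holds_from(f, g, c=2, n0=9))   # False: f(9) = 181 > 2*81 = 162
print(holds_from(f, g, c=2, n0=10))  # True: the bound holds from n0 = 10 on
```

So $n^2+100\in O(n^2)$ with witnesses $c=2$, $n_0=10$, even though the bound fails for $n<10$.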

Keep in mind that a very common abuse of notation is to write $f(n)=O(g(n))$ instead of $f(n)\in O(g(n))$.
