[Math] comparing functions in big O notation

asymptotics, functions

I am extremely new to calculating big-O notation, and I am confused by this quote from the book Discrete Mathematics and Its Applications:

For instance, if we have two algorithms for solving a problem, one using 100n^2 + 17n + 4 operations and the other using n^3 operations, big-O notation can help us see that the first algorithm uses far fewer operations when n is large, even though it uses more operations for small values of n, such as n = 10.

So if n is less than 10 (e.g. 5), do we just substitute it like this:

100*5^2 + 17*5 + 4

But this is larger than 5^3 = 125.

How do I correctly compare these kinds of functions? Is calculus needed for this?

Best Answer

You don't need to calculate anything. Just note: in a polynomial, the fastest-growing (or fastest-decreasing) term is always the monomial of highest degree. For a general polynomial

$a_0+a_1x+\dots+a_nx^n$

this is $a_nx^n$.

Meaning: for large values of the variable, this term dominates all the others, so you can just look at the highest-order term to see how the function behaves for large $x$. This is essentially what the quote from the book is telling you: for large values of the variable ($n$ in your case), the second algorithm becomes very expensive, in particular much more expensive than the first one. Imagine, for example, that the two algorithms multiply matrices and $n$ is the dimension of the matrix.
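You can see this dominance numerically. Here is a small Python sketch (my own illustration, not from the book): the ratio of the full polynomial $100n^2+17n+4$ to its leading term $100n^2$ tends to 1 as $n$ grows, so the lower-order terms stop mattering.

```python
# Ratio of 100n^2 + 17n + 4 to its leading term 100n^2.
# As n grows, the contribution of 17n + 4 becomes negligible
# and the ratio approaches 1.
def p(n):
    return 100 * n**2 + 17 * n + 4

for n in [10, 100, 10_000]:
    print(n, p(n) / (100 * n**2))
```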

Big-O notation tells you that for a very large system you should use the first algorithm, as it is less expensive ($cn^2$ operations) than the other one ($cn^3$).
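To see the crossover concretely, here is a quick Python sketch (my addition, using the two cost functions from the book's example) that scans for the first $n$ at which the $n^3$ algorithm becomes more expensive than the $100n^2+17n+4$ one:

```python
def cost_first(n):
    """Operations used by the first algorithm: 100n^2 + 17n + 4."""
    return 100 * n**2 + 17 * n + 4

def cost_second(n):
    """Operations used by the second algorithm: n^3."""
    return n**3

# Scan upward until the cubic overtakes the quadratic with the
# large constant factor. Below this point the *first* algorithm
# is the more expensive one, exactly as the book says.
n = 1
while cost_second(n) <= cost_first(n):
    n += 1
print(n)  # first n where n^3 exceeds 100n^2 + 17n + 4
```

So for every $n$ up to that crossover the cubic algorithm is actually cheaper, and only beyond it does the big-O comparison kick in.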

Edit: you can also take into consideration terms like $\ln(x)$ (very slowly growing) or $e^x$ (very, very fast growing).
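A quick numerical check of those growth rates (again my own Python sketch): $\ln(x)/x$ shrinks toward 0, and even $x^3/e^x$ collapses to 0, which is why $e^x$ eventually dominates any polynomial.

```python
import math

# ln(x) grows far slower than x, and e^x grows far faster than
# any fixed power of x: both ratios below shrink as x increases.
for x in [10, 100, 500]:
    print(x, math.log(x) / x, x**3 / math.exp(x))
```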
