Show that $(n + a)^b = \Theta(n^b)$

algorithms, asymptotics

In the book I'm following, I found the following solution:

To show that $(n + a)^b = \Theta(n^b)$, we want to find constants $c_1, c_2, n_0 > 0$ such that $$0 \leq c_1 n^b \leq (n + a)^b \leq c_2 n^b$$ for all $n \geq n_0$.

Note that $n + a \leq n + |a| \leq 2n$ when $|a| \leq n$, and $n + a \geq n - |a| \geq \frac{1}{2}n$ when $|a| \leq \frac{1}{2}n$.

Thus, when $n \geq 2|a|$ (so that $|a| \leq \frac{1}{2}n \leq n$ and both observations above apply), $$0 \leq \frac{1}{2}n \leq n + a \leq 2n.$$

Since $b > 0$ and all quantities are nonnegative, raising everything to the power $b$ preserves the inequalities:
$$0 \leq \left(\frac{1}{2} n\right)^b \leq (n + a)^b \leq (2n)^b, \\ 0 \leq \left(\frac{1}{2}\right)^b n^b \leq (n + a)^b \leq 2^b n^b.$$

Thus, $c_1 = (1/2)^b$, $c_2 = 2^b$, and $n_0 = 2|a|$ satisfy the definition.
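If I plug in concrete numbers to see this in action (say $a = 3$ and $b = 2$, my own choice), the claim becomes: for all $n \geq 6$, $$0 \leq \tfrac{1}{4} n^2 \leq (n + 3)^2 \leq 4 n^2,$$ which does seem to hold (e.g. at $n = 6$: $9 \leq 81 \leq 144$).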

Can someone please elaborate on how they arrived at this solution? What is the way to estimate Big Theta? I'm following Intro to Algorithms, but it jumps ahead so quickly and never really explains the math behind calculating Big O and Big Theta, so I'm really lost and scared that I might fail my exam. Can you point me to a good tutorial on how to find Big O step by step?

Thanks for your time; I really appreciate your effort!

Best Answer

I think your problem is that you're approaching this from the wrong perspective, thinking the proof is meant to be something it really isn't.

This proof is not an example of a calculation you ought to have been able to do for yourself by now. The proof is there in order to give you the result, as a free fact that you can afterwards use in your own calculations, without worrying about remembering how to prove it. What the proof gives you is a rule saying that if a constant is added or subtracted before something is raised to a fixed power, the added or subtracted constant has no effect on the asymptotic growth of the whole.
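For example, once you have this fact, if an analysis leaves you with a running time like $(n+5)^2$ comparisons (an arbitrary instance, just for illustration), you can immediately write $(n+5)^2 = \Theta(n^2)$ and move on, without re-deriving the constants $c_1$, $c_2$, $n_0$ from scratch.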

There's a fair chance that you're not even supposed to be able to figure out constants for yourself, because your teachers are more interested in getting to the point of actually discussing some algorithms, rather than the minor internal details of the notation used for algorithmic analysis -- which is after all just a tool, not the main goal of the course. This may well be why the details of the proof are being given to you: it saves time compared to training you to construct it from thin air. You should then get with the program and just remember the result, not the irrelevant details of the particular proof.

(And don't worry: with just a bit of experience you will be able to reconstruct this proof on the fly. But there's no reason to panic just because you don't already have this experience before you begin acquiring it. Also, this is not to say that you should ignore the proof. Make sure you understand that it works, if not how anyone invented it in the first place. Your goal with this understanding should be to get an intuitive feel for how asymptotic growth works, not to try to discern a set of mechanical rules for "finding the answer".)
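If it helps, one low-tech way to convince yourself that the constants really work is to check them numerically. Below is a minimal sketch in Python (the particular values of $a$ and $b$ are arbitrary choices, just for illustration) that verifies the sandwich $c_1 n^b \leq (n+a)^b \leq c_2 n^b$, with $c_1 = (1/2)^b$, $c_2 = 2^b$, $n_0 = 2|a|$, over a finite range of $n$:

```python
# Numerical sanity check of the constants from the proof above:
#   c1 = (1/2)^b,  c2 = 2^b,  n0 = 2*|a|
# The values of a and b tried below are arbitrary illustrative choices.

def check_theta_bounds(a, b, n_max=10_000):
    c1 = 0.5 ** b
    c2 = 2.0 ** b
    n0 = max(2 * abs(a), 1)  # start at 1 when a == 0
    for n in range(n0, n_max + 1):
        lower = c1 * n ** b
        upper = c2 * n ** b
        value = (n + a) ** b
        # The sandwich 0 <= c1*n^b <= (n+a)^b <= c2*n^b must hold.
        assert 0 <= lower <= value <= upper, (a, b, n)
    print(f"a={a}, b={b}: bounds hold for {n0} <= n <= {n_max}")

for a in (-7, 0, 3, 42):
    for b in (0.5, 1, 2, 3):
        check_theta_bounds(a, b)
```

A finite check is of course no substitute for the proof, but watching the numbers obey the bounds can make the choice of constants feel a lot less magical.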


Furthermore, when you speak of "the math in calculating Big O", it sounds like you have the (quite common) misconception that estimating the growth rate of a function is a definite procedure whose rules you can follow mechanically to get one unique correct answer -- the way, for example, differentiating a function can be done by following a handful of mechanical rules. This is not true.

First of all, everything is big-O and big-$\Theta$ of itself, so if your analysis ends up with $(n+23)^2+3\cdot 2^{n+5\log n}$, it is always correct to call that simply $$\mathcal O\bigl((n+23)^2+3\cdot 2^{n+5\log n}\bigr).$$ However, there are simpler descriptions of this that are also correct, and your goal would generally be to find as simple a description as you can -- within the resources (time, knowledge) available to you! -- in the hope of getting something that is easy to compare to other such estimates. But how much time to spend polishing the asymptotic notation is not an exact science. For example, if only a small bit of simplification tells you what you need to know (say, that the algorithm you're considering is hopelessly worse than the one you already have), there's no point in wasting time carrying the simplification as far as it can possibly go.
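To make that concrete, here is roughly how such a simplification might go, assuming $\log$ means $\log_2$ (so that $2^{5\log n} = n^5$; a different base changes the exponent but not the idea): $$\mathcal O\bigl((n+23)^2+3\cdot 2^{n+5\log n}\bigr) = \mathcal O\bigl(n^2 + n^5\, 2^n\bigr) = \mathcal O\bigl(n^5\, 2^n\bigr),$$ where the first step uses the very result proved above to replace $(n+23)^2$ by $n^2$ and drops the constant factor $3$, and the second step discards the $n^2$ term because a sum is dominated by its fastest-growing summand.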

It is also often true in practice that you don't want to spend a lot of time proving a super-tight asymptotic bound for your algorithm, if you can quickly prove a slightly looser bound that nevertheless shows that it is good for your purpose.

Relax. Remember the result. Try to understand the growth classes intuitively, given the rules about them you're shown. Remember that this proof was in the book for a reason. If, instead, it had been an exercise, you would have been justified in worrying that you couldn't solve it. But it wasn't.