[Math] Asymptotically optimal algorithms

algorithms, computational complexity, computer science

Suppose one has an algorithm that solves a problem using at most $f(n)$ operations on an input of size $n$. How can one prove, if it is indeed the case, that this algorithm is the fastest possible for the problem?

I am aware of the open $P \ne NP$ problem, but I assume there are nontrivial problems for which it has been proved that no faster algorithm exists. What are some examples, and how is this proved?

Best Answer

There's no general method, only a wealth of techniques. Here are some examples:

  • You prove that $f(n)$ is the minimum time required just to read the input. For example, the shortest-path problem on a graph with $V$ vertices and $E$ edges requires $\Omega(V + E)$ time, since any correct algorithm must examine the whole graph.
  • You prove that an algorithm performing fewer operations cannot gather enough information about the input to output the correct answer: one can change the input in a way that changes the result without the algorithm "noticing" the difference. An example of this is the proof that comparison-based sorting requires $\Omega(n \log n)$ comparisons (see the decision-tree sketch after this list).
  • You prove that an algorithm running in less time would imply the existence of an algorithm solving some other problem faster than its known lower bound (e.g., sorting in $o(n \log n)$ time). This is achieved via reduction; a sketch of one such reduction also follows the list.
  • You prove that the output itself is, in the worst case, as large as $f(n)$, so no algorithm can finish sooner than it takes to write the answer down. For example, outputting the $n^{th}$ Fibonacci number requires $\Omega(n)$ time, since $F_n \sim \phi^{n}/\sqrt{5}$, where $\phi \approx 1.618$ is the golden ratio, and so $F_n$ has $\Theta(n)$ bits.
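
Here is a minimal sketch, in LaTeX, of the standard decision-tree argument behind the sorting bound mentioned in the second bullet:

```latex
% Decision-tree lower bound for comparison sorting.
% Any comparison sort is a binary decision tree whose leaves are the
% possible output orderings, so correctness forces at least n! leaves,
% and a binary tree of depth d has at most 2^d leaves.
\begin{align*}
  \#\text{leaves} \ge n! &\implies \text{depth} \ge \log_2(n!), \\
  \log_2(n!) = \sum_{k=1}^{n} \log_2 k
    &\ge \frac{n}{2}\log_2\frac{n}{2} = \Omega(n \log n).
\end{align*}
```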

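For the reduction technique in the third bullet, here is a sketch in Python of the classic reduction from sorting to the 2-D convex hull. The `convex_hull` argument is a hypothetical routine (any hull algorithm returning vertices in counter-clockwise order); the point is that an $o(n \log n)$ hull algorithm would yield an $o(n \log n)$ comparison sort, contradicting the sorting lower bound, so convex hull inherits the $\Omega(n \log n)$ bound.

```python
# Sketch: sorting reduces to 2-D convex hull with O(n) extra work.
# `convex_hull` is a hypothetical subroutine returning the hull
# vertices in counter-clockwise order.

def sort_via_hull(xs, convex_hull):
    """Sort distinct numbers using only a convex-hull subroutine."""
    # Lift each number x onto the parabola y = x^2; the lifted points
    # are in strictly convex position, so all of them lie on the hull.
    points = [(x, x * x) for x in xs]
    hull = convex_hull(points)
    # A counter-clockwise traversal starting at the leftmost vertex
    # visits the points in increasing x order.
    start = hull.index(min(hull))
    return [x for x, _ in hull[start:] + hull[:start]]
```
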
There are more techniques, but those are some examples off the top of my head.

EDIT: As pointed out in the comments, the $n^{th}$ Fibonacci number is exponential in $n$, so its binary representation has $\Theta(n)$ bits, which is exponential in the bit length ($\approx \log_2 n$) of the input. The output-size argument therefore gives a lower bound of $\Omega(n)$, not $\Omega(2^{n})$ as an earlier version of this answer claimed.
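
To make the corrected bound concrete, here is a small Python check (the function name is mine, for illustration) showing that $F_n$ has about $n \log_2 \phi \approx 0.694\,n$ bits, so merely writing the answer down already takes $\Omega(n)$ time:

```python
import math

def fib(n: int) -> int:
    """Iteratively compute the n-th Fibonacci number (F_0 = 0, F_1 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The bit length of F_n grows linearly in n: bits/n -> log2(phi).
for n in (100, 1_000, 10_000):
    bits = fib(n).bit_length()
    print(f"n={n:>6}  bits={bits:>5}  bits/n={bits / n:.3f}")
print(f"log2(phi) = {math.log2((1 + math.sqrt(5)) / 2):.3f}")  # ~0.694
```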