Is faster-than-linear addition possible using parallel computing?

algorithms, arithmetic, computational-complexity

I saw a comment on some YouTube video claiming that it's possible to add two n-bit numbers in $o(n)$ time (i.e., faster than linear). At first I assumed this was correct, since it was written on the internet and everything on the internet must be true, but I couldn't think of any way to do it, so I decided to ask here.

Is it possible to add two n-bit numbers in less than $O(n)$ operations? Assume you have an unlimited number of cores/processors that you can use for parallel computation.

If it were, for example, XOR instead of addition, it's easy to see how to achieve an $O(\log n)$ algorithm: split the number into $n/2$ 2-bit pieces, compute the XORs on those using $n/2$ different cores, and then repeat the same procedure on the $n/2$-bit number obtained from the results. Can we do something similar for addition? If not, is there a proof that it's impossible?
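For concreteness, here is a minimal sketch of the tree-reduction idea described in the question (the function name `xor_reduce_rounds` and the round counting are my own illustration, not from the question or any library). Each `while`-iteration is one "round": all the pairwise XORs in a round are independent, so with enough cores a round takes $O(1)$ time and the whole fold finishes in $O(\log n)$ rounds.

```python
# Sketch of the log-depth reduction pattern for an associative operation (XOR).
# Each loop iteration halves the list; all pairs within a round are independent,
# so they could be assigned to separate cores.

def xor_reduce_rounds(values):
    """Fold a list of bits with XOR; returns (result, number_of_rounds)."""
    rounds = 0
    while len(values) > 1:
        # Every pair (values[2k], values[2k+1]) is independent of the others.
        values = [values[i] ^ values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

if __name__ == "__main__":
    bits = [1, 0, 1, 1, 0, 1, 0, 0]
    parity, depth = xor_reduce_rounds(bits)
    print(parity, depth)  # parity of the bits, reached in about log2(n) rounds
```

The obstacle with addition is the carry: each sum bit appears to depend on all lower-order bits, which is exactly the dependence the answer below deals with.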

Best Answer

To expand on a comment to user21820's excellent answer: one can actually show that addition can be implemented in constant depth, with a polynomial-size circuit involving AND, NOT, and OR gates with arbitrary fan-in. That is, in circuit complexity terms, addition can be implemented in the class $\textsf{AC}_0$.
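To make the constant-depth claim concrete, here is a rough Python rendering of the standard carry formula behind it (the function and helper names are my own, and this simulates the circuit sequentially rather than running in parallel). The carry into position $i$ is 1 exactly when some lower position $j$ generates a carry ($x_j \wedge y_j$) and every position strictly between $j$ and $i$ propagates it ($x_k \vee y_k$). Each carry is therefore one unbounded fan-in OR of unbounded fan-in ANDs, so the depth stays constant while the gate count grows only polynomially in $n$.

```python
# Constant-depth carry formula: c_i = OR_{j<i} ( g_j AND AND_{j<k<i} p_k ),
# where g_j = x_j AND y_j (generate) and p_k = x_k OR y_k (propagate).
# Each `any(... all(...) ...)` below corresponds to an unbounded fan-in
# OR of unbounded fan-in ANDs, so the simulated circuit depth does not
# grow with n even though the amount of work does.

def add_constant_depth(x_bits, y_bits):
    """x_bits, y_bits: little-endian lists of 0/1 of equal length n.
    Returns the (n+1)-bit sum, computing every carry from scratch."""
    n = len(x_bits)
    g = [x & y for x, y in zip(x_bits, y_bits)]   # generate a carry here
    p = [x | y for x, y in zip(x_bits, y_bits)]   # propagate a carry through here
    carry = [0] * (n + 1)
    for i in range(1, n + 1):
        carry[i] = int(any(g[j] and all(p[k] for k in range(j + 1, i))
                           for j in range(i)))
    # Sum bit i is the XOR of the two input bits and the incoming carry.
    return [(x_bits[i] ^ y_bits[i] ^ carry[i]) for i in range(n)] + [carry[n]]

if __name__ == "__main__":
    # 11 + 6 = 17, i.e. 0b1011 + 0b0110 = 0b10001 (lists are little-endian)
    print(add_constant_depth([1, 1, 0, 1], [0, 1, 1, 0]))  # [1, 0, 0, 0, 1]
```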

See, for instance, these lecture notes.

Now, the "arbitrary fan-in" part may sound like a big assumption; however, since it is known that $\textsf{AC}_0\subseteq \textsf{NC}_1$ (where $\textsf{NC}_1$ is the class of functions computable by $O(\log n)$-depth, poly-size circuits with fan-in $\leq 2$ gates), this is in fact a stronger statement than saying addition can be implemented in logarithmic depth.
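For contrast, here is a sketch of the fan-in-2, logarithmic-depth route: a Kogge-Stone-style parallel prefix over (generate, propagate) pairs, combined with the associative operator $(g_2,p_2)\circ(g_1,p_1) = (g_2 \vee (p_2 \wedge g_1),\; p_2 \wedge p_1)$. The names are again my own, and the code simulates the rounds sequentially; within each doubling round all updates are independent, giving $O(\log n)$ depth with fan-in-2 gates.

```python
# Log-depth carry computation via parallel prefix (Kogge-Stone style).
# After the scan, g[i] == "a carry comes out of position i"; each outer
# loop iteration is one round in which all index updates are independent.

def add_log_depth(x_bits, y_bits):
    """Little-endian 0/1 lists of equal length n; returns the (n+1)-bit sum."""
    n = len(x_bits)
    g = [x & y for x, y in zip(x_bits, y_bits)]   # generate
    p = [x ^ y for x, y in zip(x_bits, y_bits)]   # XOR-propagate
    d = 1
    while d < n:
        new_g, new_p = g[:], p[:]
        for i in range(d, n):                     # all i are independent
            new_g[i] = g[i] | (p[i] & g[i - d])
            new_p[i] = p[i] & p[i - d]
        g, p, d = new_g, new_p, 2 * d
    carry_in = [0] + g[:n - 1]                    # carry into each position
    return [(x_bits[i] ^ y_bits[i] ^ carry_in[i]) for i in range(n)] + [g[n - 1]]

if __name__ == "__main__":
    print(add_log_depth([1, 1, 0, 1], [0, 1, 1, 0]))  # 11 + 6 -> [1, 0, 0, 0, 1]
```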
