Discrepancy in minimum number of bits required to represent a number

binary, integers

Right off the bat, I want to say that I know the "How many bits?" question has already been asked many times, in many ways.

I have done my best to comb through the answers given in the previous posts, but there seem to be two conflicting answers. For example, in this post, there is a disagreement on the following:

Given an unsigned integer $n$, what is the minimum number of bits $N_b$ required to represent it?

The first (and marked correct) answer is: $N_b=\lfloor{\log_2(n)+1}\rfloor$

However, users in the comments and answers below claim that the above is wrong, and that the actual answer is: $N_b=\lceil\,\log_2(n+1)\rceil$

When I try to figure it out myself, I come up with the second answer:

Need $N_b$ s.t. $n\leq 2^{N_b}-1$, which implies that $N_b\geq\log_2 (n+1)$. And since we cannot have fractional $N_b$, but know that $\log_2 (n+1)$ is the lowest $N_b$ can be, we must round up to the next integer. Therefore we have $N_b=\lceil\,\log_2(n+1)\rceil$.
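As a quick sanity check of this derivation, the second expression can be compared against Python's `int.bit_length()`, which independently reports the number of bits needed to represent a nonnegative integer:

```python
import math

# Check ceil(log2(n + 1)) against int.bit_length() for a range of n > 0.
for n in range(1, 2049):
    assert math.ceil(math.log2(n + 1)) == n.bit_length()
```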

Does anyone know where the first answer comes from and/or why there is a discrepancy/disagreement between the two?

Best Answer

First, note that $\log_2(n)$ is an integer $p$ iff $n=2^p$, and $\log_2(n)$ is strictly increasing with $n$.

  • First expression

For $n\in [2^p,2^{p+1}-1]$, $\log_2(n)\in[p,p+1)$, which means $\lfloor\log_2(n)+1\rfloor=p+1$, which is the number of bits needed for these values of $n$. This holds for all $p\ge0$, hence for all $n>0$.
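This can be illustrated numerically on a single interval (taking $p=4$ as an example):

```python
import math

# On [2^p, 2^(p+1) - 1], floor(log2(n) + 1) is constant and equals p + 1.
p = 4
for n in range(2**p, 2**(p + 1)):
    assert math.floor(math.log2(n) + 1) == p + 1
```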

  • Second expression

For $n\in [2^p,2^{p+1}-1]$, we have $n+1\in [2^p+1,2^{p+1}]$, hence $\lceil\log_2(n+1)\rceil=p+1$, which is also the expected result.

Both expressions are correct for $n>0$. For $n=0$, the first is undefined, while the second yields $0$. It's not entirely clear how the number of bits should be defined for $n=0$, but I'd say you still need one bit.
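The agreement of the two expressions for $n>0$, and their divergence at $n=0$, can be verified directly (a small Python sketch; the function names are just for illustration):

```python
import math

def bits_floor(n):
    # First expression: floor(log2(n) + 1); undefined for n = 0.
    return math.floor(math.log2(n) + 1)

def bits_ceil(n):
    # Second expression: ceil(log2(n + 1)); yields 0 for n = 0.
    return math.ceil(math.log2(n + 1))

# Both agree with each other (and with int.bit_length) for all n > 0.
for n in range(1, 4097):
    assert bits_floor(n) == bits_ceil(n) == n.bit_length()

print(bits_ceil(0))  # 0 — though one could argue a single bit is still needed
```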