[Physics] Negative SNR and Shannon–Hartley theorem

Tags: experimental-physics, noise, signal-processing

It is intuitive to think that if the noise amplitude is greater than the signal amplitude, it will obscure the signal. But using the Shannon–Hartley theorem, one can see that a receiver can read the signal even if the SNR is negative, provided the bandwidth is high enough. What is the intuition behind this?

The Shannon–Hartley theorem gives the channel capacity C, meaning the theoretical tightest upper bound on the rate at which information can be communicated at an arbitrarily low error rate using an average received signal power S through an analog communication channel subject to additive white Gaussian noise of power N:
$C = B \log_2 \left( 1+\frac{S}{N} \right) $
where
C is the channel capacity in bits per second, a theoretical upper bound on the net bit rate (information rate, sometimes denoted I) excluding error-correction codes;
B is the bandwidth of the channel in hertz (passband bandwidth in case of a bandpass signal);
S is the average received signal power over the bandwidth (in case of a carrier-modulated passband transmission, often denoted C), measured in watts (or volts squared);
N is the average power of the noise and interference over the bandwidth, measured in watts (or volts squared); and
S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal to the noise and interference at the receiver (expressed as a linear power ratio, not as logarithmic decibels).

Source: https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem
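The claim in the question can be checked numerically. A minimal sketch (Python; the function name `capacity_bps` is mine, just for illustration) that evaluates Shannon's formula at a negative SNR in dB:

```python
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), with the SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)  # dB -> linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Even at SNR = -10 dB (noise power ten times the signal power) the
# capacity is positive, and widening the band raises it proportionally:
for bandwidth in (1e3, 1e6):
    print(f"B = {bandwidth:>9.0f} Hz, SNR = -10 dB -> "
          f"C = {capacity_bps(bandwidth, -10):,.0f} bit/s")
```

The capacity never reaches zero for any positive S/N; a negative SNR in dB only makes the `log2` factor small, which more bandwidth can compensate for.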

Best Answer

The SNR in Shannon's equation is the signal power divided by the noise power, S/N. It is a plain linear number: power cannot be less than 0, and neither can noise power. What you are thinking of is the SNR expressed in decibels, call it SNR(dB). For SNR = 1 the SNR(dB) is indeed 0, and for SNR < 1 (i.e., noise power greater than signal power) SNR(dB) is negative.

Similarly, for SNR = 0 the $\log_2(1+\mathrm{SNR})$ is 0, and for SNR < 1 (i.e., SNR(dB) negative) the $\log_2(1+\mathrm{SNR})$ is small but positive, getting closer and closer to 0 as the SNR does.
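A short sketch of that dB-versus-linear point (Python; the helper names are mine):

```python
import math

def to_db(power_ratio):
    """Linear power ratio -> decibels."""
    return 10 * math.log10(power_ratio)

def from_db(value_db):
    """Decibels -> linear power ratio."""
    return 10 ** (value_db / 10)

# SNR < 1 (noise power above signal power) <=> SNR(dB) < 0,
# yet log2(1 + SNR) stays positive, just small:
for snr in (2.0, 1.0, 0.5, 0.1):
    print(f"SNR = {snr}: SNR(dB) = {to_db(snr):+.1f}, "
          f"log2(1+SNR) = {math.log2(1 + snr):.3f}")
```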

So, yes, indeed one can read (or, as Shannon would say, perfectly decode) a signal at negative SNRs in dB. It happens all the time, for instance in spread-spectrum communications (which is a way of coding a signal, or more generally, information); see an example further down. But there is in fact a Shannon limit on what is called the SNR per bit, defined below as Eb/No. It is actually the most important quantity.

One can perfectly decode or read a signal or information down to that limit. Below the limit you cannot read without errors, and the error rate increases exponentially.

A good way to see what really happens is to write Shannon's equation

$C = B \log_2(1+\mathrm{SNR})$

as

$C/B = \log_2(1+\mathrm{SNR})$,

and then, using $\mathrm{SNR} = S/(N_0 B)$ (with $N_0$ the noise power spectral density), you get

$C/B = \log_2\left(1+\frac{S}{N_0 B}\right)$.

Then, setting $S$ = signal power = $E_b C$, where $E_b$ is the energy per bit, and setting

$z \equiv \mathrm{SNR} = \frac{E_b}{N_0} \cdot \frac{C}{B}$, (Eq. 1)

with $E_b/N_0$ the SNR per bit, one gets

$C/B = \log_2\left(1+\frac{E_b C}{N_0 B}\right) = \log_2(1+z) = z \log_2\left[(1+z)^{1/z}\right]$

so

$\frac{C}{Bz} = \log_2\left[(1+z)^{1/z}\right]$

or, using Eq. 1,

$\frac{N_0}{E_b} = \log_2\left[(1+z)^{1/z}\right]$.
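A quick numerical spot-check of that last identity (Python; the operating point z = 0.25 is arbitrary, chosen only for illustration):

```python
import math

# Spot-check the identity N0/Eb = log2((1+z)^(1/z)) against Eq. 1.
z = 0.25                   # SNR (linear), an arbitrary operating point
cb = math.log2(1 + z)      # C/B from Shannon's equation
ebno = z / cb              # Eb/N0 from Eq. 1: z = (Eb/N0) * (C/B)
print(math.isclose(1 / ebno, math.log2((1 + z) ** (1 / z))))
```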

In the limit as B goes to infinity with C finite, C/B goes to zero, and so, by Eq. 1, z goes to zero. Since the limit of $(1+z)^{1/z}$ as z goes to zero is e, one gets

$\lim_{B \to \infty} \frac{E_b}{N_0} = \frac{1}{\log_2 e} = \ln 2 \approx 0.693$

or, as B goes to infinity, Eb/No goes to 0.693, and Eb/No (in dB) goes to −1.6 dB.

You cannot decode a signal (or information) when the Eb/No (dB) = SNR per bit is less than −1.6 dB. If you plot B/C versus Eb/No, the asymptote is at −1.6 dB.

That is called the Shannon limit. It is not possible to do better.
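One can verify the limit numerically. A sketch (Python; the helper name `ebno_at_capacity` is mine) that combines Shannon's equation with Eq. 1 to get the minimum Eb/N0 at a given spectral efficiency C/B:

```python
import math

def ebno_at_capacity(spectral_efficiency):
    """Minimum Eb/N0 (linear) at spectral efficiency C/B, from
    C/B = log2(1 + (Eb/N0)*(C/B))  =>  Eb/N0 = (2**(C/B) - 1) / (C/B)."""
    return (2 ** spectral_efficiency - 1) / spectral_efficiency

for cb in (2.0, 1.0, 0.1, 1e-6):
    ebno = ebno_at_capacity(cb)
    print(f"C/B = {cb:>8}: Eb/N0 = {ebno:.4f} ({10 * math.log10(ebno):+.2f} dB)")

# As C/B -> 0 the required Eb/N0 approaches ln 2 ~ 0.693, about -1.59 dB.
```

Lower spectral efficiency (more bandwidth per bit) always reduces the required Eb/N0, but never below ln 2.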

But we do decode at negative SNRs (in dB) all the time. A spread-spectrum signal with B/C = 100 has, by Eq. 1,

$\mathrm{SNR} = \frac{E_b}{N_0} \cdot \frac{C}{B}$, or in dB,

SNR(dB) = Eb/No (dB) − 20 dB,

so at best we could get SNR(dB) = −1.6 − 20 = −21.6 dB.

That could happen, for instance, with an extremely well coded spread-spectrum signal with 20 dB of (so-called) processing gain. The limit would be −21.6 dB of SNR; usually we don't get all the way to the limit. The same is true of error-correcting codes: one uses them to operate at lower SNR while keeping enough Eb/No to decode well enough. Shannon's theorem does not tell us how to construct those codes, only that they exist.
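The arithmetic of that link budget, as a sketch (Python; the B/C = 100 and 20 dB figures are the ones from the example above):

```python
import math

# Spread-spectrum budget from the example: B/C = 100, i.e. a
# (so-called) processing gain of 10*log10(100) = 20 dB.
shannon_limit_db = 10 * math.log10(math.log(2))  # Eb/N0 floor, about -1.6 dB
processing_gain_db = 10 * math.log10(100)        # 20 dB

# From Eq. 1 in dB: SNR(dB) = Eb/N0(dB) - 10*log10(B/C).
# The despreader trades bandwidth for SNR at the receiver.
snr_floor_db = shannon_limit_db - processing_gain_db
print(f"lowest workable channel SNR ~ {snr_floor_db:.1f} dB")
```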

See Shannon in Wikipedia at https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem

See the Shannon limit explained starting at http://news.mit.edu/2010/explained-shannon-0115 and at YouTube at https://www.youtube.com/watch?v=Wq1-Iq9Vm28. See a Shannon limit graph with R actual bit rate, and R/B the spectral efficiency, pasted from http://www.gaussianwaves.com/2008/04/channel-capacity/

