[Math] Find probability of error in Binary Symmetric Channel with LLN

information-theory, law-of-large-numbers, probability

Consider a binary symmetric channel with probability of error $p$.

[Figure: diagram of a binary symmetric channel with crossover probability $p$.]

Consider the following scheme of encoding a message for transmitting one bit.

If we have to transmit 0, we send $n$ zeros: $000 \cdots 0$.
If we have to transmit 1, we send $n$ ones: $111 \cdots 1$.

The scheme for decoding would be as follows.

Observe the received output sequence.

Decide that $0$ was sent if the majority of the received bits are zeros.

Decide that $1$ was sent if the majority of the received bits are ones.
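For concreteness, here is a minimal Monte Carlo sketch of this scheme (assuming NumPy; the function name `repetition_error_rate` and the chosen parameter values are mine, purely for illustration):

```python
import numpy as np

def repetition_error_rate(n, p, trials=100_000, seed=0):
    """Empirical error rate of the n-fold repetition code over a BSC(p).

    By symmetry it suffices to simulate sending the all-zeros codeword:
    each bit is flipped independently with probability p, and majority
    decoding fails exactly when more than half the bits are flipped.
    """
    rng = np.random.default_rng(seed)
    flips = rng.random((trials, n)) < p      # True = bit flipped by channel
    errors = flips.sum(axis=1) > n / 2       # majority flipped => decode error
    return errors.mean()

for n in (1, 5, 21, 101):                    # odd n avoids ties in the vote
    print(n, repetition_error_rate(n, p=0.1))
```

With $p = 0.1$ the empirical error rate drops rapidly as $n$ grows, which is exactly the behaviour to be proved.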

My question is the following: how can I prove that the probability of error in this scheme goes to zero as $n$ (the number of repetitions) goes to infinity? I'm supposed to use the Law of Large Numbers, but I'm not sure how.

Best Answer

Let $X$ denote the total number of flipped bits among the $n$ transmitted bits.

$X$ is a random variable following the binomial distribution $\mathrm{Bin}(n, p)$, with mean $~E[X] = np~$ and variance (square of the standard deviation) $~V[X] = npq~$, where $q = 1-p$.

We get an error when the fraction of the $n$ transmitted bits that are flipped exceeds one half. Thus we are interested in the scaled random variable $$\begin{align} Y &\equiv \frac{X}n \quad \text{with} & E[Y]&=\frac{E[X]}n = p\\ && V[Y]&=\frac{ V[X]}{n^2} = \frac{pq}{n} \end{align}$$
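Since $X$ is binomial, this error probability can be computed exactly; here is a quick numerical check of how it decays (a sketch assuming SciPy; `exact_error_prob` is an illustrative name):

```python
from scipy.stats import binom

def exact_error_prob(n, p):
    """P(X > n/2) for X ~ Bin(n, p): more than half of the n bits flipped.

    binom.sf(k, n, p) returns P(X > k), so sf(n/2, ...) is exactly the
    probability of a majority of flips (take n odd so ties cannot occur).
    """
    return binom.sf(n / 2, n, p)

for n in (1, 5, 21, 101):
    print(n, exact_error_prob(n, p=0.1))
```

For any fixed $p < \frac{1}2$ the printed probabilities shrink very quickly as $n$ grows; the argument below explains why.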

Therefore, we need the strict inequality $p < \frac{1}2$ for the whole process to work, so that

  1. On average, the fraction of flipped bits is $E[Y] = p$, which is less than one half.
  2. As $n \to \infty$, the standard deviation $~\sigma \equiv \sqrt{V[Y]} = \sqrt{ \frac{pq}n }~$ vanishes, meaning all the probability mass concentrates around the mean with vanishing width. An error ($Y \geq p_0$, where $p_0 = \frac{1}2$) requires $Y$ to land at a finite distance ($p_0 - p > 0$) from the mean, so its probability goes to zero.

Slightly more formally, the probability that $Y$ (the fraction of flipped bits) lies at distance at least $\Delta$ from its mean $p$ satisfies

$$ \Pr\bigg\{ ~ |Y - p| \geq \Delta ~\bigg\} \leq \left(\frac{\sigma}{\Delta} \right)^2 \qquad \to 0 \quad \text{as} \quad n \to \infty$$

This is a common and elementary form of Chebyshev's inequality. In particular, choosing $\Delta = p_0 - p = \frac{1}2 - p$ bounds the error probability directly: if $Y \geq \frac{1}2$ then $|Y - p| \geq \frac{1}2 - p$, so

$$ \Pr\{\text{error}\} = \Pr\Big\{ Y \geq \tfrac{1}2 \Big\} \leq \Pr\Big\{ |Y - p| \geq \tfrac{1}2 - p \Big\} \leq \frac{pq}{n \left(\frac{1}2 - p\right)^2} \quad \to 0 \quad \text{as} \quad n \to \infty. $$
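To see how this bound behaves, one can compare it numerically against the exact binomial tail (again a sketch assuming SciPy; the values of $n$ are arbitrary):

```python
from scipy.stats import binom

p, p0 = 0.1, 0.5          # crossover probability and decision threshold
q = 1 - p
delta = p0 - p            # distance from the mean fraction p to the threshold

for n in (11, 101, 1001):
    chebyshev = (p * q / n) / delta**2   # (sigma/Delta)^2 with V[Y] = pq/n
    exact = binom.sf(n / 2, n, p)        # P(X > n/2): majority of bits flipped
    print(n, chebyshev, exact)
```

The Chebyshev bound decays only like $1/n$, while the true error probability in fact decays exponentially in $n$; but the $1/n$ rate is already enough to prove the limit asked about.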

This is exactly the Law of Large Numbers applied in this specific context: write $X \equiv \sum_{k = 1}^n B_k$, where the $B_k$ are i.i.d. Bernoulli random variables with two outcomes (success with probability $p$, failure with $q$) and $E[B_k] = p$ for all $k$. The vanishing variance of the 'sample mean' $Y = X/n$ means that $Y$ converges in probability to the true mean $p$.
