Coding Theory – How to Detect Double Error Using Hamming Code

coding-theory, combinatorics, discrete-mathematics

I have a sequence of bits
$$
111011011110
$$
and need to detect double errors (without correcting them) using Hamming codes. A Hamming code places a control (parity) bit at each position $2^n$, so I should insert these control bits at their positions.

$$
0010110011011110
$$
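
To make that layout step concrete, here is a minimal sketch in Python (the names `lay_out`, `DATA` and `CONTROL_POSITIONS` are mine, not from any library). It reserves positions 1, 2, 4 and 8 for control bits, matching the sequence above:

```python
# Minimal sketch: keep the 12 data bits in order and reserve the power-of-two
# positions 1, 2, 4, 8 (1-based) for control bits, filled with 0 for now.

DATA = [int(b) for b in "111011011110"]
CONTROL_POSITIONS = [1, 2, 4, 8]          # control-bit positions used in the question

def lay_out(data, control_positions):
    total_len = len(data) + len(control_positions)
    codeword = []
    data_iter = iter(data)
    for pos in range(1, total_len + 1):   # positions are counted from 1
        if pos in control_positions:
            codeword.append(0)            # placeholder for a control bit
        else:
            codeword.append(next(data_iter))
    return codeword

layout = lay_out(DATA, CONTROL_POSITIONS)
print("".join(map(str, layout)))          # -> 0010110011011110
```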

I've found a simple explanation of how to compute the code for a sequence of bits. It says that each control bit is responsible for a set of positions, according to this rule: the control bit at position $2^n$ covers $2^n$ consecutive positions starting at its own position, then skips the next $2^n$ positions, and so on alternately. So the first control bit covers the 1st, 3rd, 5th, etc. positions; the second covers the 2nd, 3rd, 6th, 7th, 10th, 11th, etc. positions; the third (at position 4) covers the 4th, 5th, 6th, 7th, 12th, 13th, etc. positions; and so on. The value of each control bit is the modulo-2 sum (XOR) of the bits it is responsible for.

Here is an illustration of what I mean:

[Image: table showing which bit positions each control bit is responsible for]
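
Continuing the layout sketch above, the control-bit values can be computed with this rule. Position $p$ (counting from 1) is covered by the control bit at position $2^k$ exactly when bit $k$ of $p$ is set, which is the same as the block rule just described (the function name below is my own):

```python
# Minimal sketch, continuing lay_out above: fill in each control bit as the
# XOR (modulo-2 sum) of the positions it is responsible for.
# Position p is covered by the control bit at 2^k exactly when p & 2^k != 0.

def fill_control_bits(codeword, control_positions):
    filled = list(codeword)
    for ctrl in control_positions:                        # ctrl is a power of two
        covered = [p for p in range(1, len(filled) + 1) if p & ctrl]
        parity = 0
        for p in covered:
            if p != ctrl:                                 # skip the control slot itself
                parity ^= filled[p - 1]
        filled[ctrl - 1] = parity
        print(f"control bit {ctrl} covers {covered} -> value {parity}")
    return filled

codeword = fill_control_bits(layout, CONTROL_POSITIONS)
print("".join(map(str, codeword)))
```

Printing the coverage lists also makes it easy to see that no control bit covers position 16, which is exactly the gap noted next.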

Assuming this rule is right, the last (16th) bit after the control bits are inserted is not covered by any of the control bits.

So the question is: how can I detect a double error (only detect, not correct) in the given sequence of bits using the Hamming code?

Best Answer

This is handled well by Wikipedia, which states:

Hamming codes have a minimum distance of 3, which means that the decoder can detect and correct a single error, but it cannot distinguish a double bit error of some codeword from a single bit error of a different codeword. Thus, they can detect double-bit errors only if correction is not attempted.

To remedy this shortcoming, Hamming codes can be extended by an extra parity bit covering the whole codeword.
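
For illustration, here is a minimal sketch in Python of that extended-code idea (the helper names `syndrome` and `classify` are my own). It assumes the overall parity bit is appended as the last bit with even total parity, and that the control bits at the power-of-two positions cover every other code position:

```python
# Minimal sketch of double-error *detection* with an extra overall parity bit:
# classify a received word by its (syndrome, overall parity) pair.

from functools import reduce

def syndrome(bits, control_positions):
    """XOR of the covered positions for each control bit, packed into one integer."""
    s = 0
    for ctrl in control_positions:                 # ctrl is a power of two
        parity = reduce(lambda a, p: a ^ bits[p - 1],
                        (p for p in range(1, len(bits) + 1) if p & ctrl), 0)
        if parity:
            s |= ctrl
    return s

def classify(received, control_positions):
    """Return 'no error', 'single error at ...', or 'double error detected'."""
    *code, overall = received                      # last bit is the overall parity bit
    s = syndrome(code, control_positions)
    overall_ok = (reduce(lambda a, b: a ^ b, received, 0) == 0)   # even total parity expected
    if s == 0 and overall_ok:
        return "no error"
    if not overall_ok:                             # odd number of errors -> assume a single error
        return f"single error at position {s if s else len(received)}"
    return "double error detected"                 # nonzero syndrome but overall parity OK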