In order to understand the solution, we first need some basics of coding theory. Unlike sealed envelopes, messages sent over a wire or through the air (wireless) can get corrupted. For example, if we send “Hello” it may arrive as “Hgllo”, the e becoming a g because of some electrical or electromagnetic distortion. If we had typed “Hgllo” on a mobile keyboard, it would have been autocorrected to “Hello”. That is possible because our natural languages carry a lot of redundant information; otherwise, losing one word in a conversation would make the whole sentence meaningless.
Coding theory is about how to add redundant information systematically so that computers can correct errors. We are only going to talk about one type of error, where a letter gets replaced by another letter. If ABCDEF becomes GBCDEH, two letters are wrong, and we say the two strings have a distance of 2. What happens if we simply repeat each letter? The message ABC becomes AABBCC. Now if a single error happens, say GABBCC, we know the original message got corrupted, but we still can’t figure out what it was: it could be either ABC or GBC. A better technique is to repeat each letter thrice. The message ABC becomes AAABBBCCC. Now if a single error happens, say GAABBBCCC, we can recover ABC by taking the majority within each group of three (assuming there is only one error). Note that this cannot correct two errors: if you see GGABBBCCC, you will think the real message is GBC.
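The repeat-thrice scheme above is easy to sketch in code. This is a minimal illustration (the function names are mine, not standard): encode by tripling each letter, decode by majority vote within each group of three.

```python
from collections import Counter

def encode_repeat3(msg):
    # Repeat every letter three times: "ABC" -> "AAABBBCCC"
    return "".join(ch * 3 for ch in msg)

def decode_repeat3(coded):
    # Take the majority letter in each group of three
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(Counter(group).most_common(1)[0][0])
    return "".join(out)

print(encode_repeat3("ABC"))        # AAABBBCCC
print(decode_repeat3("GAABBBCCC"))  # ABC  -- one error, corrected
print(decode_repeat3("GGABBBCCC"))  # GBC  -- two errors fool the decoder
```

Note how the last call reproduces the failure described above: with two errors in the same group, the majority vote picks the wrong letter.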
What if the message has 4 letters? ABCD becomes AAABBBCCCDDD. This scheme requires you to send 12 letters just to correct 1 error in 4 letters. We denote this by [12, 4, 1] — where 12 is the length of the code, 4 is the original message length, and 1 is the number of errors we can correct.
For now, let us temporarily restrict our focus to binary letters (0 and 1), because computers communicate purely in 0s and 1s. For example, the letter A is represented by the number 65, which in binary is 0100 0001 — an 8-bit message, or simply a byte.
It turns out that if we want to send 4 bits, using 12 bits is not very efficient; you can do better. In fact, just 7 bits are enough to encode 4 bits in a way that corrects a single error. This is called the Hamming code.
Let us understand in more detail why our repeating scheme works. Consider any two valid code sequences, for example AAABBBCCCDDD and AAAEEECCCDDD: even though only one character changed in the input, 3 letters changed in the output. So if we make sure the distance between any two coded sequences is at least 3, then we can correct one error.
There are 16 possible words using 4 bits (0000, 0001, 0010, …, 1111). If we can assign code words to each of them such that the distance between any two is at least 3, then the code can correct 1 error. The Hamming code achieves exactly this. If we are encoding ABCD, the first 4 bits are the same as the input, viz. ABCD; the 5th bit is A + B + D, the 6th bit is A + C + D, and the 7th bit is B + C + D. Of course, a sum like 3 is replaced by 1 by taking the remainder modulo 2. Such codes are also called linear codes, since each output bit is a linear combination of the input bits. You can read chapter 5 of Jiri Matousek’s 33 Miniatures for a quick proof, using a bit of linear algebra, that the Hamming code is indeed a valid code. Or you can simply write down all 16 code words and check every pair to make sure they differ in at least three places (you will need to compare 120 pairs, though). http://kam.mff.cuni.cz/~matousek/stml-53-matousek-1.pdf
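Instead of checking the 120 pairs by hand, we can let a short script do it. This sketch encodes all 16 messages exactly as described above and verifies that every pair of code words differs in at least 3 places:

```python
from itertools import product, combinations

def hamming_encode(a, b, c, d):
    # Bits 1-4 are the message A B C D; bit 5 = A+B+D,
    # bit 6 = A+C+D, bit 7 = B+C+D, all taken mod 2.
    return (a, b, c, d, (a + b + d) % 2, (a + c + d) % 2, (b + c + d) % 2)

def distance(u, v):
    # Hamming distance: number of positions where the words differ
    return sum(x != y for x, y in zip(u, v))

codewords = [hamming_encode(*bits) for bits in product([0, 1], repeat=4)]

# Check all 120 pairs of the 16 code words
min_dist = min(distance(u, v) for u, v in combinations(codewords, 2))
print(len(codewords), min_dist)  # 16 3
```

The minimum distance comes out to exactly 3, which is why the Hamming code corrects one error.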
What if we want to correct more than 1 error? The Golay code is one such code. G23 and G24 are two codes that encode 12 bits in 23 and 24 bits respectively. In the first case the Hamming distance is 7, and in the second it is 8. There is no reason to insist on distance 8; distance 7 is enough, because of the following.
Imagine two valid code words (x0) and (x7) that are a distance of 7 apart. Then you can get from x0 to x7 in 7 changes.
(x0) — x1 — x2 — x3* — x4* — x5 — x6 — (x7)
Even if (x0) suffers 3 errors, or (x7) suffers 3 errors, you only end up at (x3) or (x4) respectively — each corrupted word is still closer to its own code word, so the two are never confused. In general, if we want to correct k errors, the Hamming distance between any two code words must be 2k + 1 or higher.
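The 2k + 1 rule can be seen in miniature with the triple-repetition code from earlier (distance 3, so k = 1). A sketch of nearest-code-word decoding — pick whichever valid code word is closest to what was received:

```python
def distance(u, v):
    # Hamming distance between two bit tuples
    return sum(x != y for x, y in zip(u, v))

# The triple-repetition code for a single bit: distance 3 = 2*1 + 1
codewords = {(0, 0, 0): 0, (1, 1, 1): 1}

def decode(received):
    # Decode to the message of the nearest valid code word
    return codewords[min(codewords, key=lambda c: distance(c, received))]

print(decode((0, 1, 0)))  # 0 -- one flip of (0,0,0), corrected
print(decode((1, 1, 0)))  # 1 -- two flips of (0,0,0) land closer to (1,1,1)
```

One error leaves the received word within distance k of the true code word; two errors can push it past the midpoint, which is exactly why distance 2k + 1 is the requirement.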
It turns out that G23 is not just a good code, it is a “perfect” code: if we want to send 12 bits and correct up to 3 errors, then 23 bits is exactly what is needed, with no slack at all.
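“Perfect” has a precise counting meaning: the balls of radius 3 around the 2^12 code words of G23 exactly tile the space of all 2^23 binary strings, with nothing left over. The arithmetic is easy to verify:

```python
from math import comb

# Number of 23-bit strings within distance 3 of a given code word:
# choose 0, 1, 2, or 3 of the 23 positions to flip.
ball = sum(comb(23, i) for i in range(4))

print(ball)                    # 2048, which is 2^11
print(2**12 * ball == 2**23)   # True: 2^12 balls exactly fill the space
```

Every 23-bit string lies within distance 3 of exactly one code word, so decoding is always unambiguous — that is what makes G23 perfect.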
Now back to our problem. You have 23 answers in front of you; treat them as a (possibly corrupted) G23 code word and decode them back to 12 bits — those 12 bits are what you convey. We know that when your friend encodes these 12 bits back into a G23 code word, at most 3 bits will be out of place. In other words, your friend will get at most 3 answers wrong, and hence at least 20 of them correct.
The relationship with string theory requires a bigger write-up. But essentially, because of the internal structure of a field theory on some kind of donut-shaped universe, the transformations you can perform there look like the transformations you can perform on the Golay code.