Hamming code
==Description==
If more error-correcting bits are included with a message, and if those bits can be arranged such that different incorrect bits produce different error results, then bad bits could be identified. In a seven-bit message, there are seven possible single-bit errors, so three error control bits could potentially specify not only that an error occurred but also which bit caused the error.

Hamming studied the existing coding schemes, including two-of-five, and generalized their concepts. To start with, he developed a [[wikt:nomenclature|nomenclature]] to describe the system, including the number of data bits and error-correction bits in a block. For instance, parity includes a single bit for any data word, so assuming [[ASCII]] words with seven bits, Hamming described this as an ''(8,7)'' code, with eight bits in total, of which seven are data. The repetition example would be ''(3,1)'', following the same logic. The [[code rate]] is the second number divided by the first; for our repetition example, 1/3.

Hamming also noticed the problems with flipping two or more bits, and described this as the "distance" (it is now called the ''[[Hamming distance]]'', after him). Parity has a distance of 2, so one bit flip can be detected but not corrected, and any two bit flips will be invisible. The (3,1) repetition has a distance of 3, as three bits need to be flipped in the same triple to obtain another code word with no visible errors. It can correct one-bit errors or it can detect – but not correct – two-bit errors. A (4,1) repetition (each bit is repeated four times) has a distance of 4, so flipping three bits can be detected, but not corrected. When three bits flip in the same group there can be situations where attempting to correct will produce the wrong code word. In general, a code with distance ''k'' can detect but not correct {{nowrap|''k'' − 1}} errors.
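The distance and repetition ideas above can be illustrated with a short Python sketch (an illustrative sketch, not part of the article; bit strings of "0"/"1" characters are assumed):

```python
# Sketch of Hamming distance and the (3,1) repetition code.
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def repetition_encode(bit: str) -> str:
    return bit * 3  # (3,1) code: each bit is sent three times

def repetition_decode(word: str) -> str:
    # Majority vote corrects any single flipped bit.
    return "1" if word.count("1") >= 2 else "0"

# The two code words of the (3,1) code are distance 3 apart,
# so any single-bit error is correctable by majority vote.
assert hamming_distance(repetition_encode("0"), repetition_encode("1")) == 3
assert repetition_decode("010") == "0"  # one flip in "000" is corrected
assert repetition_decode("110") == "1"  # two flips are silently miscorrected
```

As the last line shows, a distance-3 code used for correction mistakes a two-bit error for a one-bit error, which is why the same code can be used either to correct one error or to detect two, but not both at once.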
Hamming was interested in two problems at once: increasing the distance as much as possible, while at the same time increasing the code rate as much as possible. During the 1940s he developed several encoding schemes that were dramatic improvements on existing codes. The key to all of his systems was to have the parity bits overlap, such that they managed to check each other as well as the data.

===General algorithm===
The following general algorithm generates a single-error correcting (SEC) code for any number of bits. The main idea is to choose the error-correcting bits such that the index-XOR (the [[XOR]] of all the bit positions containing a 1) is 0. We use positions 1, 10, 100, etc. (in binary) as the error-correcting bits, which guarantees it is possible to set the error-correcting bits so that the index-XOR of the whole message is 0. If the receiver receives a string with index-XOR 0, they can conclude there were no corruptions, and otherwise, the index-XOR indicates the index of the corrupted bit.

An algorithm can be deduced from the following description:
# Number the bits starting from 1: bit 1, 2, 3, 4, 5, 6, 7, etc.
# Write the bit numbers in binary: 1, 10, 11, 100, 101, 110, 111, etc.
# All bit positions that are powers of two (have a single 1 bit in the binary form of their position) are parity bits: 1, 2, 4, 8, etc. (1, 10, 100, 1000)
# All other bit positions, with two or more 1 bits in the binary form of their position, are data bits.
# Each data bit is included in a unique set of 2 or more parity bits, as determined by the binary form of its bit position.
## Parity bit 1 covers all bit positions which have the '''least''' significant bit set: bit 1 (the parity bit itself), 3, 5, 7, 9, etc.
## Parity bit 2 covers all bit positions which have the '''second''' least significant bit set: bits 2–3, 6–7, 10–11, etc.
## Parity bit 4 covers all bit positions which have the '''third''' least significant bit set: bits 4–7, 12–15, 20–23, etc.
## Parity bit 8 covers all bit positions which have the '''fourth''' least significant bit set: bits 8–15, 24–31, 40–47, etc.
## In general, each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero.

If a byte of data to be encoded is 10011010, then the data word (using _ to represent the parity bits) would be __1_001_1010, and the code word is 011100101010. The choice of the parity, even or odd, is irrelevant, but the same choice must be used for both encoding and decoding.

This general rule can be shown visually:

:{| class="wikitable" style="text-align:center;"
|-
! colspan="2" | <span style="color:#888">Bit position</span>
! <span style="color:#888">1</span> !! <span style="color:#888">2</span> !! <span style="color:#888">3</span> !! <span style="color:#888">4</span> !! <span style="color:#888">5</span> !! <span style="color:#888">6</span> !! <span style="color:#888">7</span> !! <span style="color:#888">8</span> !! <span style="color:#888">9</span> !! <span style="color:#888">10</span> !! <span style="color:#888">11</span> !! <span style="color:#888">12</span> !! <span style="color:#888">13</span> !! <span style="color:#888">14</span> !! <span style="color:#888">15</span> !! <span style="color:#888">16</span> !! <span style="color:#888">17</span> !! <span style="color:#888">18</span> !! <span style="color:#888">19</span> !! <span style="color:#888">20</span>
| rowspan="7" | ...
|-
! colspan="2" | Encoded data bits
! style="background-color: #90FF90;" | p1
! style="background-color: #90FF90;" | p2
! d1
! style="background-color: #90FF90;" | p4
! d2
! d3
! d4
! style="background-color: #90FF90;" | p8
! d5
! d6
! d7
! d8
! d9
! d10
! d11
! style="background-color: #90FF90;" | p16
! d12
! d13
! d14
! d15
|-
! rowspan="5" | Parity<br />bit<br />coverage
! style="background-color: #90FF90;" | p1
| {{ya}} || || {{ya}} || || {{ya}} || || {{ya}} || || {{ya}} || || {{ya}} || || {{ya}} || || {{ya}} || || {{ya}} || || {{ya}} ||
|-
! style="background-color: #90FF90;" | p2
| || {{ya}} || {{ya}} || || || {{ya}} || {{ya}} || || || {{ya}} || {{ya}} || || || {{ya}} || {{ya}} || || || {{ya}} || {{ya}} ||
|-
! style="background-color: #90FF90;" | p4
| || || || {{ya}} || {{ya}} || {{ya}} || {{ya}} || || || || || {{ya}} || {{ya}} || {{ya}} || {{ya}} || || || || || {{ya}}
|-
! style="background-color: #90FF90;" | p8
| || || || || || || || {{ya}} || {{ya}} || {{ya}} || {{ya}} || {{ya}} || {{ya}} || {{ya}} || {{ya}} || || || || ||
|-
! style="background-color: #90FF90;" | p16
| || || || || || || || || || || || || || || || {{ya}} || {{ya}} || {{ya}} || {{ya}} || {{ya}}
|}

Only 20 encoded bits (5 parity, 15 data) are shown, but the pattern continues indefinitely. The key property of Hamming codes, visible by inspection, is that any given bit is included in a unique set of parity bits.

To check for errors, check all of the parity bits. The pattern of errors, called the [[error syndrome]], identifies the bit in error. If all parity bits are correct, there is no error. Otherwise, the sum of the positions of the erroneous parity bits identifies the erroneous bit. For example, if the parity bits in positions 1, 2 and 8 indicate an error, then bit 1 + 2 + 8 = 11 is in error. If only one parity bit indicates an error, the parity bit itself is in error.

With {{mvar|m}} parity bits, bits from 1 up to <math>2^m-1</math> can be covered. After discounting the parity bits, <math>2^m-m-1</math> bits remain for use as data. As {{mvar|m}} varies, we get all the possible Hamming codes:

{| class="wikitable" style="text-align:right"
|-
! Parity bits !! [[Block code#The block length n|Total bits]] !! [[Block code#The message length k|Data bits]] !! Name !! [[Block code#The rate R|Rate]]
|-
| 2 || 3 || 1 || Hamming(3,1)<br />(Triple [[repetition code]]) || 1/3 ≈ 0.333
|-
| 3 || 7 || 4 || [[Hamming(7,4)]] || 4/7 ≈ 0.571
|-
| 4 || 15 || 11 || Hamming(15,11) || 11/15 ≈ 0.733
|-
| 5 || 31 || 26 || Hamming(31,26) || 26/31 ≈ 0.839
|-
| 6 || 63 || 57 || Hamming(63,57) || 57/63 ≈ 0.905
|-
| 7 || 127 || 120 || Hamming(127,120) || 120/127 ≈ 0.945
|-
| 8 || 255 || 247 || Hamming(255,247) || 247/255 ≈ 0.969
|-
| 9 || 511 || 502 || Hamming(511,502) || 502/511 ≈ 0.982
|-
| colspan="5" | ...
|-
| {{mvar|m}} || <math>n=2^m-1</math> || <math>k=2^m-m-1</math> || Hamming<math>(2^m-1,2^m-m-1)</math> || <math>(2^m - m - 1)/(2^m-1)</math>
|}
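The numbered procedure above can be sketched as a small encoder and decoder in Python (a minimal sketch under the article's conventions, not a reference implementation; bit positions are numbered from 1 and even parity is assumed):

```python
def hamming_encode(data):
    """Place data bits at non-power-of-two positions, then set the parity
    bits (positions 1, 2, 4, 8, ...) so that the index-XOR of all
    positions holding a 1 becomes 0."""
    code = {}
    pos = 1
    for bit in data:
        while pos & (pos - 1) == 0:  # powers of two are parity slots
            code[pos] = 0
            pos += 1
        code[pos] = bit              # other positions carry data
        pos += 1
    n = pos - 1
    syndrome = 0
    for p, b in code.items():
        if b:
            syndrome ^= p
    # Setting parity bit 2**i flips bit i of the index-XOR, so the
    # syndrome's binary digits give the parity bits directly.
    p = 1
    while p <= n:
        code[p] = 1 if syndrome & p else 0
        p <<= 1
    return [code[i] for i in range(1, n + 1)]

def hamming_decode(word):
    """Return (corrected word, error position); position 0 means no error."""
    syndrome = 0
    for p, b in enumerate(word, start=1):
        if b:
            syndrome ^= p
    word = list(word)
    if syndrome:
        word[syndrome - 1] ^= 1  # flip the bit the syndrome points at
    return word, syndrome

# The worked example above: byte 10011010 encodes to 011100101010.
codeword = hamming_encode([1, 0, 0, 1, 1, 0, 1, 0])
assert codeword == [0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0]

# Flip bit 11; the index-XOR (syndrome) recovers the error position.
corrupted = list(codeword)
corrupted[10] ^= 1
fixed, pos = hamming_decode(corrupted)
assert (fixed, pos) == (codeword, 11)
```

Note how the decoder never needs a table of coverage sets: because every position contributes its own index to the XOR, a single corrupted bit leaves exactly its position as the syndrome.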
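The rows of the table follow directly from the formulas in its last line; a quick Python check (illustrative only):

```python
# With m parity bits: n = 2**m - 1 total bits, k = n - m data bits.
for m in range(2, 10):
    n = 2 ** m - 1
    k = n - m
    print(f"Hamming({n},{k})  rate {k}/{n} ≈ {k / n:.3f}")
# First two lines printed:
#   Hamming(3,1)  rate 1/3 ≈ 0.333
#   Hamming(7,4)  rate 4/7 ≈ 0.571
```

The rate <math>k/n</math> approaches 1 as {{mvar|m}} grows, which is the sense in which Hamming improved on the 1/3 rate of triple repetition while keeping single-error correction.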