One question I am often asked is how many parity bytes various ECC implementations consume. To parallel the previous post, this entry addresses parity sizes for Hamming, Reed-Solomon, and BCH implementations.
## Hamming

Block Hamming codes are capable of correcting a single-bit error and detecting most double-bit errors, and require 2*log2(n) parity bits for a data block with n data bits.

Example: A 512B data block consists of 4096 (2^12) bits, so a Hamming code requires 2*12 = 24 parity bits.
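The calculation above can be sketched as a small helper (the function name is mine, not from any particular library):

```python
import math

def hamming_parity_bits(data_bytes: int) -> int:
    """Parity bits for the block Hamming code described above:
    2 * log2(n) for a block of n data bits."""
    n_bits = data_bytes * 8
    return 2 * int(math.log2(n_bits))

print(hamming_parity_bits(512))  # 512B = 4096 bits -> 2 * 12 = 24
```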

## Reed-Solomon

Parity required for an RS code depends on the symbol size, the Galois field size (GF, i.e. the exponent m in GF(2^m), which equals the symbol size in bits), and the ECC level provided by the code. There are trade-offs in selecting the most efficient symbol size for a given application, but generally the idea is to minimize the Galois field size for a given blocksize by selecting an appropriate symbol size. The general formula is 2*GF*ECC: the code uses 2t parity symbols of m bits each to correct t symbols.

Example: An RS code over a 512B data block consisting of 9-bit symbols and capable of correcting 8 symbols would require 2*9*8, or 144 bits of parity.
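A minimal sketch of the RS parity formula above (the helper name is mine):

```python
def rs_parity_bits(symbol_bits: int, ecc_level: int) -> int:
    """Reed-Solomon parity: 2t parity symbols of m bits each,
    i.e. 2 * GF * ECC in the notation used above."""
    return 2 * symbol_bits * ecc_level

print(rs_parity_bits(9, 8))  # 9-bit symbols, 8-symbol correction -> 144
```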

## BCH

Parity required for BCH depends on the Galois field size (GF), which is determined by the data block size, and the ECC correction level. The number of parity bits can be computed as GF*ECC.

Example: The following table shows the number of ECC bits/bytes required per correction block.

| Blocksize | ECC Level | ECC Bits  | ECC Bytes |
|-----------|-----------|-----------|-----------|
| 512B      | ECC 8     | 13*8=104  | 13        |
| 512B      | ECC 16    | 13*16=208 | 26        |
| 1024B     | ECC 24    | 14*24=336 | 42        |
| 1024B     | ECC 40    | 14*40=560 | 70        |
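The table values can be reproduced with a short sketch (the helper name is mine; the field size is the smallest m such that 2^m exceeds the number of data bits, which gives GF(2^13) for 512B and GF(2^14) for 1024B as in the table):

```python
def bch_parity_bits(data_bytes: int, ecc_level: int) -> int:
    """BCH parity: GF * ECC, where GF is the field exponent m.

    bit_length() of the data-bit count yields the smallest m
    with 2^m > data bits: 4096 -> 13, 8192 -> 14.
    """
    m = (data_bytes * 8).bit_length()
    return m * ecc_level

for blocksize, t in [(512, 8), (512, 16), (1024, 24), (1024, 40)]:
    bits = bch_parity_bits(blocksize, t)
    print(f"{blocksize}B, ECC {t}: {bits} bits = {bits // 8} bytes")
```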

While the Galois field for BCH is larger than that for Reed-Solomon, the factor of 2 in the RS formula makes BCH more efficient for NAND applications, where errors tend not to occur in groups (RS is better suited to applications with burst errors, since a single symbol absorbs several adjacent bit errors).

The move to larger block sizes also makes the BCH code more efficient, since higher correction levels more than compensate for the additional bytes protected (see the ECC Trends whitepaper).
