
Erasure Codes: An Introduction

I briefly mentioned erasure coding in my RAID Redefined post as one of the technologies that vendors and some in the storage chattering class have promoted as a replacement for the RAID technologies that have protected our data for decades. A reader emailed me to ask if I would expound on erasure coding.

Being a community-oriented pundit, I prefer such requests be made in the comments on Network Computing, so future generations, or this humble reporter after a few adult beverages, can reference the conversation later. Nonetheless, I figured an explanation of erasure coding and its applicability to common storage applications would be in order.

The methods we colloquially call erasure codes are a form of forward error correction (FEC). FEC was first used on the unreliable, high-latency, low-bandwidth connections between spacecraft and their controlling ground stations. The communications protocols we commonly use here on Earth, like TCP, rely on the receiving station to detect errors and request a retransmission from the transmitter.

Error detection and retransmission work well when error rates and latencies are low, but they become quite inefficient when talking to a Mars rover; the round-trip delay is 6-40 minutes, depending on where Earth and Mars are in their respective orbits. Imagine how much data would have to be retransmitted if a simple protocol sent a message that said, "I missed everything after packet 6767," when there were several minutes of data in flight.

Erasure codes are designed to include sufficient redundant information in the transmission stream to allow the receiver to rebuild the data, even when multiple packets are lost or corrupted in transmission.

They're particularly important in those situations where the retransmission of lost data is impractical, such as broadcasts or multicasts. After all, if the Amazon drone delivering your next-door neighbor's pre-Black Friday purchases blocks your DirecTV dish's view of the satellite just as Odell Beckham Jr. makes the greatest NFL catch ever, you'd miss it, and you'd just have to live with the replays on ESPN.

From an information theory point of view, a storage system looks a lot like that DirecTV transmission, with the RAID stripe on each piece of media comparable to a data packet in the transmission stream. When an application tries to read from the media, the storage system, like the DirecTV receiver, can't ask the application that wrote the data, seconds or years ago, to resend any data blocks it can't read.

By the strictest definitions, even the single- and double-parity protection schemes used by RAID-5 and RAID-6 are erasure codes, allowing data to be reconstructed if one or two data blocks are unreadable or, in information theory speak, erased. In fact, many RAID-6 implementations use a simple case of the Reed-Solomon coding scheme. In common usage, however, the term "erasure coding" refers to more resilient encoding schemes that can reliably recover data when failures mount beyond one or two.
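The single-parity case is simple enough to sketch in a few lines of Python. This is purely illustrative, not how any array firmware actually lays out data: one parity block is the XOR of the data blocks, so any one erased block can be rebuilt by XORing the survivors.

```python
# Toy RAID-5-style single parity: the parity block is the XOR of the
# data blocks, so any single "erased" block can be rebuilt from the rest.
import functools
import operator

data = [b"blockA", b"blockB", b"blockC"]   # equal-sized data blocks
parity = bytes(functools.reduce(operator.xor, column)
               for column in zip(*data))

lost = data.pop(1)                         # erase one data block
rebuilt = bytes(functools.reduce(operator.xor, column)
                for column in zip(*data, parity))
assert rebuilt == lost                     # the erased block comes back
```

Lose two blocks, though, and XOR alone can't save you, which is where the more general codes come in.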

When applied to data protection, an erasure code takes a relatively large data block and mathematically manipulates it into multiple smaller encoded blocks, in such a way that the decoding process can recover the original data from a partial set of those encoded blocks. These algorithms let the system administrator define two parameters: the total number of encoded blocks to create and the number of encoded blocks required to reassemble the original data.

Being somewhat mathematically challenged, I took seven semesters to pass three semesters of calculus in college. I like to think of erasure-coded data as being similar to the algebra problems where you have to solve x equations in y unknowns. As long as the equations are independent and you have at least as many equations as unknowns, the problem can be solved.

For example, if we choose an encoding system that creates 12 encoded blocks of which eight are required, that would be the equivalent of 12 equations with eight unknowns, regardless of whether the system uses Reed-Solomon encoding or more modern schemes like the low-density parity check (LDPC) codes used for error correction in SSDs and the turbo codes used for LTE data transmission.
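To make the equations-and-unknowns analogy concrete, here's a toy eight-of-12 encoder and decoder in Python. It's a deliberately simplified sketch: it does its arithmetic with exact rationals, where production Reed-Solomon implementations work over Galois fields for speed and fixed-size blocks, and the names and the Vandermonde-matrix construction are my illustrative choices, not any vendor's implementation. The point is just that any eight surviving blocks give you eight independent equations in eight unknowns.

```python
# Toy k-of-n erasure code built on ordinary linear algebra with exact
# rational arithmetic (real systems use finite-field math instead).
from fractions import Fraction

K, N = 8, 12  # recover from any K of the N encoded blocks

def vandermonde_row(x, k):
    """Row [1, x, x^2, ..., x^(k-1)]; any K such rows with distinct
    x values form an invertible matrix."""
    return [Fraction(x) ** j for j in range(k)]

def encode(data):
    """Turn K data values into N encoded (x, value) blocks."""
    assert len(data) == K
    return [(x, sum(c * d for c, d in zip(vandermonde_row(x, K), data)))
            for x in range(N)]

def decode(blocks):
    """Recover the K data values from any K surviving blocks by
    Gaussian elimination -- solving K equations in K unknowns."""
    assert len(blocks) >= K
    # Build the augmented matrix [A | b] from the first K survivors.
    m = [vandermonde_row(x, K) + [Fraction(v)] for x, v in blocks[:K]]
    for col in range(K):
        # Find a row with a nonzero pivot and normalize it.
        pivot = next(r for r in range(col, K) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        m[col] = [v / m[col][col] for v in m[col]]
        # Eliminate this column from every other row.
        for r in range(K):
            if r != col and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    return [m[r][K] for r in range(K)]

data = [3, 1, 4, 1, 5, 9, 2, 6]
blocks = encode(data)
survivors = blocks[4:]            # lose any 4 of the 12 blocks
assert decode(survivors) == data  # the original data comes back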

Unlike in simple parity schemes, the encoded blocks of erasure-coded data aren't segregated into data blocks and ECC blocks, and erasure coding schemes may carry more storage overhead than the simple ratio of total blocks to blocks required for recovery implies. The eight-of-12 scheme in the example above could have more overhead than the 50% increase the block counts suggest. That flexibility lets vendors trade storage efficiency against the compute power required to encode and decode the data.
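The nominal figure itself is simple arithmetic; anything beyond it, such as per-block metadata or padding, is implementation-dependent:

```python
# Nominal storage expansion of a k-of-n erasure code: write n blocks
# to store k blocks' worth of data. Real systems can add metadata and
# padding on top of this figure.
k, n = 8, 12
print(f"expansion: {n / k:.2f}x -> {n / k - 1:.0%} extra raw capacity")
# expansion: 1.50x -> 50% extra raw capacity
```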

In my next post, I'll look at where erasure codes are a good replacement for RAID, and even for multi-site replication. I'll also examine where they're a bad idea and how today's erasure-coded systems can improve.