Audio data compression


Note: This article is about audio data compression, which reduces the data rate of digital audio signals. It should not be confused with audio level compression, which reduces the dynamic range of audio signals, or with companding, which combines compression with complementary dynamic range expansion as a noise reduction technique.


Audio compression is a form of data compression designed to reduce the size of audio data files. Audio compression algorithms are typically referred to as audio codecs. As with other forms of data compression, many "lossless" and "lossy" algorithms exist to achieve this.

Lossless compression

As with image compression, both lossy and lossless algorithms are used in audio compression. As file storage and communications bandwidth have become less expensive and more available, the popularity of lossless formats such as FLAC has increased sharply, as more people choose to maintain a permanent archive of their audio files. The primary users of lossless compression are audio engineers, audiophiles, and consumers who want to preserve the full quality of their audio, in contrast to the quality loss of lossy techniques such as Vorbis and MP3. In practice, many users employ both schemes, or maintain both lossy and lossless versions of the same files, as their needs require.

It is difficult to retain all the data in an audio stream and still achieve substantial compression. First, the vast majority of sound recordings, being captured from the real world, are highly complex and close to random. Since one of the key methods of compression is to find patterns and repetition, such data does not compress well; in a similar manner, photographs compress less efficiently with lossless methods than simpler computer-generated images do. Interestingly, even computer-generated sounds can contain waveforms complicated enough to challenge many compression algorithms. This is due to the nature of audio waveforms, which are generally difficult to simplify without a (necessarily lossy) conversion to frequency information, as performed by the human ear.

The second reason is that the values of audio samples change very quickly, so repeated strings of bytes rarely appear, and generic data compression algorithms do not work well for audio. However, convolution with the filter [-1 1] (that is, taking the first difference) tends to slightly whiten the spectrum (decorrelate it, make it flatter), allowing a traditional lossless compressor at the encoder to do its job; integration at the decoder restores the original signal. Codecs such as FLAC, Shorten, and TTA go further and use linear prediction to estimate the spectrum of the signal: at the encoder, the inverse of the estimator is used to whiten the signal by removing spectral peaks, while at the decoder the estimator is used to reconstruct the original signal.
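
As a minimal sketch of the first-difference approach, assuming integer PCM samples held in a NumPy array (the function names are illustrative):

    import numpy as np

    def encode_first_difference(samples):
        """Convolve with [-1 1]: keep the first sample, then store each
        successive difference. The residual has a flatter spectrum and
        smaller typical magnitudes, which a generic entropy coder (such
        as Rice coding) compresses better than the raw samples."""
        residual = np.empty_like(samples)
        residual[0] = samples[0]
        residual[1:] = samples[1:] - samples[:-1]
        return residual

    def decode_first_difference(residual):
        """Integrate (cumulative sum) to restore the signal exactly."""
        return np.cumsum(residual)

    # Round trip on a toy, slowly varying signal
    x = np.array([100, 102, 105, 107, 108], dtype=np.int32)
    assert np.array_equal(decode_first_difference(encode_first_difference(x)), x)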

Lossless audio codecs introduce no quality loss, so their usability can be judged by:

  • Speed of compression and decompression
  • Degree of compression
  • Software and hardware support

For comparisons of lossless audio codecs, see the hydrogenaudio.org wiki comparison, Speek's comparison (note the other links as well), the graphs on Hans Heiden's site, and Robin Whittle's 2003 comparison of several algorithms with a discussion of Rice coding.

Lossy compression

Lossy audio compression is used in an extremely wide range of applications. In addition to direct applications (MP3 players or computers), digitally compressed audio streams are used in most video DVDs, digital television, streaming media on the Internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression (5-20% of the original stream, rather than 50-60%) by simplifying the data and discarding detail. Since bandwidth and storage are always limited, the reduced audio quality is clearly a worthwhile trade-off for applications where users want to transmit or store more material. (That is, one can fit many more songs on an iPod using lossy than using lossless compression.)

In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, pattern recognition, and linear prediction to shrink the description of the data. For example, suppose you wanted to record twenty house numbers along one side of a street, each 2 higher than the last. If the first address is 14461 (five digits), the uncompressed stream requires 20 times 5 bytes, or 100 bytes, to store. You could instead exploit the repetition and simply say: begin at 14461, increase by 2, repeat 19 times. The data is then captured losslessly in roughly 8 bytes.
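
A minimal sketch of this idea in Python, using illustrative names (store the start value plus a run-length-encoded list of differences):

    def encode_runs(values):
        start = values[0]
        runs = []  # [difference, repeat_count] pairs
        for a, b in zip(values, values[1:]):
            d = b - a
            if runs and runs[-1][0] == d:
                runs[-1][1] += 1
            else:
                runs.append([d, 1])
        return start, runs

    def decode_runs(start, runs):
        values = [start]
        for d, count in runs:
            for _ in range(count):
                values.append(values[-1] + d)
        return values

    addresses = list(range(14461, 14501, 2))  # 20 houses, step 2
    start, runs = encode_runs(addresses)
    # start = 14461, runs = [[2, 19]]: the whole street in a few values.
    assert decode_runs(start, runs) == addresses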

The innovation of lossy audio compression was to use psychoacoustics to recognize that not all data in an audio stream is perceived by the human ear. Most lossy compression reduces perceptual redundancy by first identifying sounds which are considered perceptually irrelevant, that is, sounds that are very hard to hear. Typical examples include high frequencies, or sounds that occur at the same time as other louder sounds. Those sounds are coded with decreased accuracy or not coded at all.

To continue the example, suppose the data were slightly more complex: between the tenth and eleventh houses the numbers jump by 4 instead of 2. Lossless coding would then require something like: begin at 14461, increase by 2, repeat 9 times, increase by 4, then increase by 2, repeat 8 times. So about 10 bytes, rather than 8, are needed to store the data. But if the lossy model determines that this variation is irrelevant for the application at hand, it can ignore it and keep the shorter description. Some data is lost in the process: the original data can no longer be reconstructed, only an approximation of it, judged sufficient for the application.
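
Continuing the sketch above, a lossy coder might quantize the odd difference away, shrinking the description at the cost of exact reconstruction:

    first_ten = [14461 + 2 * i for i in range(10)]          # houses 1-10
    rest = [first_ten[-1] + 4 + 2 * i for i in range(10)]   # houses 11-20, one gap of 4
    addresses = first_ten + rest

    start, runs = encode_runs(addresses)
    # runs == [[2, 9], [4, 1], [2, 9]]: three runs instead of one.

    lossy_runs = [[2, 19]]                  # pretend every step is 2
    approx = decode_runs(start, lossy_runs)
    # approx matches houses 1-10 exactly; houses 11-20 are each off by 2,
    # and the true addresses cannot be recovered from the lossy stream.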

If reducing perceptual redundancy does not achieve sufficient compression for a particular application, further lossy compression may be needed, with a quality difference that users can more readily perceive. Most lossy compression schemes allow their parameters to be adjusted to hit a target data rate, usually expressed as a bit rate. Again, the data reduction is guided by some model of how the human ear perceives the sound, with the goal of optimizing quality for the target rate. (Many different models are used for this perceptual analysis, some better suited to particular types of audio than others.) Hence, depending on the bandwidth and storage requirements, lossy compression may cause a perceived reduction in audio quality ranging from none to severe; that trade-off is usually intentional.

Because data is removed during lossy compression and cannot be recovered by decompression, lossy schemes are unsuitable for archival storage. Hence, as noted, even those who use lossy compression (for portable audio applications, for example) may keep a losslessly compressed archive for other uses. Compression technology also continues to advance, and moving to a state-of-the-art lossy codec requires starting again from the original, lossless audio data. The nature of lossy compression (for both audio and images) also means that quality degrades each time data is decompressed and then recompressed. For these and other reasons, lossy compression is inappropriate for many audio applications.

Coding methods

Transform domain methods

In order to determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time-domain sampled waveforms into a transform domain. Once transformed, typically into the frequency domain, component frequencies can be allocated bits according to how audible they are. The audibility of spectral components is determined by first calculating a masking threshold, below which sounds are estimated to lie beyond the limits of human perception.
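
A bare-bones sketch of the (unwindowed) MDCT and its inverse in NumPy; real codecs add windowing, overlap-add between adjacent blocks, and a psychoacoustic bit-allocation stage:

    import numpy as np

    def mdct(block):
        """Direct-form MDCT: 2N time samples -> N coefficients."""
        N = len(block) // 2
        n = np.arange(2 * N)
        k = np.arange(N)
        basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
        return basis @ block

    def imdct(coeffs):
        """Inverse MDCT: N coefficients -> 2N samples; overlapping halves
        of adjacent blocks must be summed to reconstruct the signal."""
        N = len(coeffs)
        n = np.arange(2 * N)
        k = np.arange(N)
        basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
        return basis @ coeffs / N

    # Crude stand-in for bit allocation: zero the weakest coefficients
    # (a real codec keeps or drops them based on the masking threshold).
    x = np.sin(2 * np.pi * 0.03 * np.arange(1024))
    X = mdct(x)
    X[np.abs(X) < 0.01 * np.abs(X).max()] = 0.0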

The masking threshold is calculated using the absolute threshold of hearing and the principles of simultaneous masking (the phenomenon wherein a signal is masked by another signal separated from it in frequency) and, in some cases, temporal masking (wherein a signal is masked by another signal separated from it in time). Equal-loudness contours may also be used to weight the perceptual importance of different components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models.
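
As one concrete ingredient, a widely cited approximation (due to Terhardt) of the absolute threshold of hearing, in dB SPL as a function of frequency; components whose level falls below the final masking threshold can be coded coarsely or not at all:

    import numpy as np

    def absolute_threshold_db(freq_hz):
        """Terhardt's approximation of the threshold of hearing (dB SPL)."""
        f = np.asarray(freq_hz, dtype=float) / 1000.0  # frequency in kHz
        return (3.64 * f ** -0.8
                - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
                + 1e-3 * f ** 4)

    # The ear is most sensitive near 3-4 kHz, where the curve dips lowest;
    # very low and very high frequencies need far more energy to be heard.
    print(absolute_threshold_db([100.0, 1000.0, 3500.0, 15000.0]))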

Time domain methods

Other types of lossy compressors, such as the linear predictive coding (LPC) used for speech, are source-based coders. These use a model of the sound's generator (for LPC, the human vocal tract) to whiten the audio signal (i.e., flatten its spectrum) prior to quantization. Linear prediction may also be thought of as a basic perceptual coding technique: reconstructing the signal through the predictor shapes the coder's quantization noise into the spectrum of the target signal, partially masking it.
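
A minimal sketch of linear-prediction whitening, using the autocorrelation method in NumPy (production coders such as FLAC use the Levinson-Durbin recursion and quantized integer coefficients):

    import numpy as np

    def lpc_coefficients(x, order):
        """Solve the normal equations R a = r for the predictor taps."""
        r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        return np.linalg.solve(R, r[1:])

    def prediction_residual(x, a):
        """Whitened signal: each sample minus its linear prediction."""
        order = len(a)
        e = x.astype(float).copy()
        for n in range(order, len(x)):
            e[n] = x[n] - a @ x[n - order:n][::-1]
        return e

    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * 0.01 * np.arange(2000)) + 0.01 * rng.standard_normal(2000)
    a = lpc_coefficients(x, order=8)
    e = prediction_residual(x, a)
    print(x.var(), e.var())  # the residual is far easier to code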

Applications

Due to the nature of lossy algorithms, audio quality suffers when a file is decompressed and recompressed (generational losses). This makes lossy-compressed files unsuitable for professional audio engineering applications, such as sound editing and multitrack recording. However, they are very popular with end users (particularly MP3), as a megabyte can store about a minute's worth of music at adequate quality.

Usability

Usability of lossy audio codecs is determined by:

  • Perceived audio quality
  • Compression factor
  • Speed of compression and decompression
  • Inherent latency of algorithm (critical for real-time streaming applications; see below)
  • Software and hardware support

Lossy formats are often used for the distribution of streaming audio, or interactive applications (such as the coding of speech for digital transmission in cell phone networks). In such applications, the data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications, and for such applications a codec designed to stream data effectively will usually be chosen.

Latency results from the methods used to encode and decode the data. Some codecs analyze a longer segment of the data in order to optimize efficiency, and consequently need a correspondingly large segment of data at once in order to decode. (Codecs often divide the stream into segments called frames, which are encoded and decoded as discrete units.) The inherent latency of the coding algorithm can be critical; for example, in two-way transmission of data, such as a telephone conversation, significant delays may seriously degrade the perceived quality.

In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, latency here refers to the number of samples that must be analyzed before a block of audio is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time-domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples must be analyzed to implement a psychoacoustic model in the frequency domain, and the latency is on the order of 23 ms (46 ms for two-way communication).
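
As a back-of-the-envelope check, assuming a 44.1 kHz sample rate:

    # An algorithm that must see about 1024 samples before it can emit a
    # block incurs roughly 1024 / 44100 s of one-way algorithmic delay.
    samples_needed = 1024
    sample_rate_hz = 44100
    latency_ms = 1000 * samples_needed / sample_rate_hz
    print(f"{latency_ms:.1f} ms one-way, {2 * latency_ms:.1f} ms two-way")
    # -> 23.2 ms one-way, 46.4 ms two-way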

Speech encoding

Speech encoding is an important category of audio data compression. The perceptual models used to estimate what a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using relatively low bit rates.

This is accomplished, in general, by some combination of two approaches:

  • Only encoding sounds that could be made by a single human voice.
  • Throwing away more of the data in the signal -- keeping just enough to reconstruct an "intelligible" voice rather than the full frequency range of human hearing.

Perhaps the earliest algorithms used in speech encoding (and in audio data compression generally) were the A-law and μ-law companding algorithms.
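
A sketch of μ-law companding in its continuous form (ITU-T G.711 actually uses a piecewise-linear approximation, with mu = 255):

    import numpy as np

    def mu_law_encode(x, mu=255.0):
        """Compress amplitudes of a signal scaled to [-1, 1]; quiet
        sounds receive relatively finer quantization than loud ones."""
        return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

    def mu_law_decode(y, mu=255.0):
        """Invert the companding curve."""
        return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

    # Companding before a uniform 8-bit quantizer keeps quiet passages
    # intelligible at telephone rates (8 kHz x 8 bits = 64 kbit/s).
    x = 0.01 * np.sin(2 * np.pi * np.linspace(0.0, 1.0, 8000))  # quiet tone
    q = np.round(mu_law_encode(x) * 127) / 127  # 8-bit-style grid
    x_hat = mu_law_decode(q)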
