Using rate-distortion theory, the optimal SNR attainable by rate-$R$ source coding is related to $R$ and the spectral flatness measure of the source. The SNR of rate-$R$ DPCM is then analyzed, compared to this optimum, and shown to fall short by only 1.53 dB.
The rate-distortion function specifies the minimum average rate $R$ required to transmit the source process at a mean distortion of $D$, while the distortion-rate function specifies the minimum mean distortion $D$ resulting from transmission of the source at average rate $R$.
These bounds are theoretical in the sense that coding techniques which attain these minimum rates or distortions are in general unknown, and are thought to be infinitely complex as well as to require infinite memory. Still, these bounds form a reference against which any specific coding system can be compared. For a continuous-amplitude white (i.e., "memoryless") Gaussian source of variance $\sigma_x^2$, the distortion-rate function is (see Berger and Jayant & Noll)
$$ D(R) \;=\; \sigma_x^2\, 2^{-2R} $$
$$ \mathrm{SNR} \;=\; 10\log_{10}\frac{\sigma_x^2}{D(R)} \;=\; 6.02\,R\ \text{dB}. $$
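As a quick sanity check on this bound, here is a minimal Python sketch (our own illustration; the function name is hypothetical, not from this module) that evaluates the memoryless-Gaussian SNR at a few rates:

```python
import math

def snr_memoryless_gaussian_db(R):
    """Best attainable SNR (dB) for a white Gaussian source at rate R bits/sample.

    From D(R) = sigma_x^2 * 2^(-2R):  SNR = 10*log10(sigma_x^2 / D) = 6.02*R dB,
    independent of the source variance sigma_x^2.
    """
    return 10 * math.log10(2 ** (2 * R))

for R in (1, 2, 3, 4):
    print(f"R = {R} bits/sample -> optimal SNR = {snr_memoryless_gaussian_db(R):.2f} dB")
```

At one bit per sample the bound is about 6.02 dB, and each additional bit buys another 6.02 dB.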
The sources we are interested in, however, are non-white.
It turns out that when the distortion $D$ is "small," non-white Gaussian sources have the following distortion-rate function (see page 644 of Jayant & Noll):
$$ D(R) \;=\; \gamma_x^2\,\sigma_x^2\, 2^{-2R}, \qquad \gamma_x^2 \;=\; \frac{\exp\!\left(\frac{1}{2\pi}\int_{-\pi}^{\pi}\ln S_x(e^{j\omega})\,d\omega\right)}{\frac{1}{2\pi}\int_{-\pi}^{\pi} S_x(e^{j\omega})\,d\omega}. $$
Note that $\gamma_x^2$, the ratio of geometric to arithmetic means of the power spectral density $S_x(e^{j\omega})$, is called the spectral flatness measure. Thus optimal coding of a non-white Gaussian source $x(n)$ yields
$$ \mathrm{SNR}_{\text{opt}} \;=\; 10\log_{10}\frac{\sigma_x^2}{D(R)} \;=\; 6.02\,R - 10\log_{10}\gamma_x^2\ \text{dB}. $$
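To make the spectral flatness measure concrete, the following sketch (our own example with hypothetical parameters, not part of the original module) numerically evaluates $\gamma_x^2$ for a Gaussian AR(1) source $x(n) = a\,x(n-1) + w(n)$ and plugs it into the optimal-SNR formula:

```python
import numpy as np

def spectral_flatness(a, sigma_w2=1.0, n_freq=2**16):
    """Spectral flatness gamma_x^2 of the AR(1) source x(n) = a*x(n-1) + w(n):
    ratio of geometric to arithmetic mean of the PSD, computed on a dense grid."""
    w = np.linspace(-np.pi, np.pi, n_freq, endpoint=False)
    psd = sigma_w2 / np.abs(1 - a * np.exp(-1j * w)) ** 2
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

a = 0.9                                  # hypothetical AR(1) coefficient
gamma2 = spectral_flatness(a)
print(f"gamma_x^2 ~= {gamma2:.4f} (closed form: 1 - a^2 = {1 - a**2:.4f})")

R = 2.0                                  # bits/sample
print(f"SNR_opt at R = {R}: {6.02 * R - 10 * np.log10(gamma2):.2f} dB")
```

The flatter the spectrum, the closer $\gamma_x^2$ is to 1; strong correlation (here $a = 0.9$) drives $\gamma_x^2$ down and raises the attainable SNR.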
To summarize, the $\mathrm{SNR}_{\text{opt}}$ expression above gives the best possible SNR for any arbitrarily-complex coding system that transmits or stores information at an average rate of $R$ bits/sample.
Let's compare the SNR-versus-rate performance achievable by DPCM to this optimum. The structure we consider is the standard DPCM loop in which the quantized DPCM outputs $y(n)$ are coded into binary bits using an entropy coder. Assuming that $y(n)$ is white (which is a good assumption for well-designed predictors), optimal entropy coding/decoding is able to transmit and recover $y(n)$ at $R = H_y$ bits/sample without any distortion.
$H_y$ is the entropy of $y(n)$, for which we derived the following expression assuming a large-$L$ uniform quantizer:
$$ R \;=\; H_y \;=\; h_e - \log_2\Delta, $$
where $h_e$ denotes the differential entropy of the unquantized prediction error $e(n)$ and $\Delta$ the quantizer step size.
Since $\sigma_q^2 = \Delta^2/12$ for the uniform quantizer in DPCM, the rate can be rewritten as
$$ R \;=\; h_e - \tfrac{1}{2}\log_2\!\left(12\,\sigma_q^2\right). $$
If $e(n)$ is Gaussian, it can be shown that the differential entropy takes on the value
$$ h_e \;=\; \tfrac{1}{2}\log_2\!\left(2\pi e\,\sigma_e^2\right), $$
so that
$$ R \;=\; \tfrac{1}{2}\log_2\!\left(2\pi e\,\sigma_e^2\right) - \tfrac{1}{2}\log_2\!\left(12\,\sigma_q^2\right) \;=\; \tfrac{1}{2}\log_2\!\left(\frac{\pi e}{6}\cdot\frac{\sigma_e^2}{\sigma_q^2}\right). $$
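The fine-quantization entropy formula above is easy to check by simulation. Below is a small sketch (our own, with hypothetical parameter values) that compares the empirical entropy of a uniformly quantized Gaussian to $h_e - \log_2\Delta$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_e, delta = 1.0, 0.1                 # hypothetical error std-dev and step size
e = rng.normal(0.0, sigma_e, 10**6)
y = delta * np.round(e / delta)           # uniform mid-tread quantizer

# Empirical entropy H_y = -sum p*log2(p) over the observed quantizer levels.
_, counts = np.unique(y, return_counts=True)
p = counts / counts.sum()
H_y = -np.sum(p * np.log2(p))

# Prediction: h_e - log2(delta), with h_e = 0.5*log2(2*pi*e*sigma_e^2) for a Gaussian.
h_e = 0.5 * np.log2(2 * np.pi * np.e * sigma_e**2)
print(f"empirical H_y = {H_y:.3f} bits, predicted = {h_e - np.log2(delta):.3f} bits")
```

With $\Delta$ a tenth of the error standard deviation, the empirical and predicted entropies agree to within a few thousandths of a bit.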
Using $\sigma_e^2 = \gamma_x^2\,\sigma_x^2$, the prediction error variance of the MSE-optimal infinite-length predictor, and rearranging the previous expression, we find
$$ \mathrm{SNR}_{\text{DPCM}} \;=\; 10\log_{10}\frac{\sigma_x^2}{\sigma_q^2} \;=\; 6.02\,R - 10\log_{10}\gamma_x^2 - 10\log_{10}\frac{\pi e}{6}\ \text{dB}. $$
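Note that the last term is rate-independent. A minimal sketch (our own, reusing the hypothetical AR(1) value $\gamma_x^2 = 0.19$ from above) makes the comparison with the optimum explicit:

```python
import math

PENALTY_DB = 10 * math.log10(math.pi * math.e / 6)   # ~1.53 dB, memoryless quantizer

def snr_opt_db(R, gamma2):
    """Optimal SNR for a non-white Gaussian source: 6.02*R - 10*log10(gamma_x^2)."""
    return 6.02 * R - 10 * math.log10(gamma2)

def snr_dpcm_db(R, gamma2):
    """Entropy-coded DPCM SNR: the optimum minus a fixed 10*log10(pi*e/6) dB."""
    return snr_opt_db(R, gamma2) - PENALTY_DB

gamma2 = 0.19                    # hypothetical AR(1) example, a = 0.9
for R in (1, 2, 3):
    print(f"R = {R}: opt = {snr_opt_db(R, gamma2):6.2f} dB, "
          f"DPCM = {snr_dpcm_db(R, gamma2):6.2f} dB, "
          f"gap = {snr_opt_db(R, gamma2) - snr_dpcm_db(R, gamma2):.2f} dB")
```

The gap is $10\log_{10}(\pi e/6) \approx 1.53$ dB at every rate.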
To summarize, a DPCM system using an MSE-optimal infinite-length predictor and optimal entropy coding of $y(n)$ could operate at an average of $R$ bits/sample with the SNR given by $\mathrm{SNR}_{\text{DPCM}}$ above.
Comparing $\mathrm{SNR}_{\text{DPCM}}$ with $\mathrm{SNR}_{\text{opt}}$, we see that DPCM incurs a penalty of $10\log_{10}(\pi e/6) \approx 1.53$ dB in SNR when compared to the optimal. From our previous discussion on optimal quantization, we recognize that this penalty comes from the fact that the quantizer in the DPCM system is memoryless. (Note that the DPCM quantizer must be memoryless since the predictor input must not be delayed.)
Though we have identified a 1.53 dB DPCM penalty with respect to the optimal, a key point to keep in mind is that the design of near-optimal coders for non-white signals is extremely difficult. When the signal statistics are rapidly changing, such a design task becomes nearly impossible. Though still non-trivial to design, near-optimal entropy coders for white signals exist and are widely used in practice. Thus, DPCM can be thought of as a way of pre-processing a colored signal that makes near-optimal coding possible. From this viewpoint, 1.53 dB might not be considered a high price to pay.
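To illustrate this pre-whitening view, here is a final sketch (our own, again using a hypothetical AR(1) source) showing that one-step linear prediction turns a strongly colored signal into a nearly white prediction error:

```python
import numpy as np

rng = np.random.default_rng(1)
a, n = 0.9, 10**5
w = rng.normal(size=n)

# Colored AR(1) source: x(n) = a*x(n-1) + w(n).
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + w[k]

# Prediction error of the optimal one-step predictor x_hat(n) = a*x(n-1).
e = x[1:] - a * x[:-1]

def lag1_corr(s):
    """Normalized lag-1 autocorrelation, a simple whiteness indicator."""
    return np.corrcoef(s[:-1], s[1:])[0, 1]

print(f"lag-1 correlation: source = {lag1_corr(x):.3f}, error = {lag1_corr(e):.3f}")
```

The source shows lag-1 correlation near 0.9, while the prediction error's is near zero, which is exactly the kind of white signal an entropy coder handles well.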
Source:
OpenStax, An introduction to source-coding: quantization, dpcm, transform coding, and sub-band coding. OpenStax CNX. Sep 25, 2009 Download for free at http://cnx.org/content/col11121/1.2