Performance
Here we analyze the optimal reconstruction error of transform coding. As the number of channels grows to infinity, the performance gain over PCM is shown to depend on the spectral flatness measure of the source. In the same limit, the performance of transform coding is shown to equal that of DPCM with an infinite-length predictor. However, when the DPCM predictor length equals the (finite) number of transform coding channels, we show that DPCM always yields strictly better performance.
For an $N$-dimensional transform coder, Equation 1 from "Gain over PCM" presented an expression for the reconstruction error variance $\sigma_r^2$ written in terms of the quantizer input variances $\sigma_{y_k}^2$. Noting the $N$-dependence of $\sigma_r^2$ in Equation 1 from "Gain over PCM" and rewriting it as $\sigma_r^2(N)$, a reasonable question might be: what is $\sigma_r^2(N)$ as $N \to \infty$?
When using the KLT, we know that $\sigma_{y_k}^2 = \lambda_k^{(N)}$, where $\lambda_k^{(N)}$ denotes the $k^{th}$ eigenvalue of $\mathbf{R}_N$, the $N \times N$ autocorrelation matrix of $x$. If we plug these into Equation 1 from "Gain over PCM", we get
$$\sigma_r^2(N) = \gamma_y\, 2^{-2R} \Big( \prod_{k=0}^{N-1} \lambda_k^{(N)} \Big)^{1/N}.$$
Writing
$$\Big( \prod_{k=0}^{N-1} \lambda_k^{(N)} \Big)^{1/N} = \exp\Big\{ \frac{1}{N} \sum_{k=0}^{N-1} \ln \lambda_k^{(N)} \Big\}$$
and using the Toeplitz Distribution Theorem (see Grenander & Szegő),
$$\lim_{N\to\infty} \frac{1}{N} \sum_{k=0}^{N-1} g\big(\lambda_k^{(N)}\big) \;=\; \frac{1}{2\pi} \int_{-\pi}^{\pi} g\big(S_x(e^{j\omega})\big)\, d\omega,$$
with $g(\lambda) = \ln \lambda$, we find that
$$\lim_{N\to\infty} \sigma_r^2(N) = \gamma_y\, 2^{-2R} \exp\Big\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \Big\} = \gamma_y\, 2^{-2R}\, \sigma_x^2\, SFM_x,$$
where $SFM_x$ denotes the spectral flatness measure of $x$, redefined below for convenience:
$$SFM_x = \frac{\exp\big\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \big\}}{\frac{1}{2\pi} \int_{-\pi}^{\pi} S_x(e^{j\omega})\, d\omega}.$$
Thus, with optimal transform and optimal bit allocation, the asymptotic gain over uniformly quantized PCM is
$$\lim_{N\to\infty} G_{TC|PCM}(N) = \frac{1}{SFM_x}.$$
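To make the limit concrete, here is a minimal numerical sketch (an illustration, not part of the original module) using a hypothetical AR(1) source $x[n] = a\,x[n-1] + w[n]$ with $a = 0.9$. For this source $SFM_x = 1 - a^2$, so the geometric mean of the eigenvalues of $\mathbf{R}_N$ should approach $\sigma_x^2\, SFM_x = \sigma_w^2$ as $N$ grows:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical AR(1) source: x[n] = a*x[n-1] + w[n], innovation variance sig_w2.
# Its autocorrelation is r[m] = sig_x2 * a^|m| and SFM_x = 1 - a^2.
a, sig_w2 = 0.9, 1.0
sig_x2 = sig_w2 / (1 - a**2)

for N in [2, 8, 32, 128, 512]:
    r = sig_x2 * a ** np.arange(N)           # autocorrelation r[0..N-1]
    lam = np.linalg.eigvalsh(toeplitz(r))    # eigenvalues of the N x N matrix R_N
    geo_mean = np.exp(np.mean(np.log(lam)))  # (prod_k lambda_k)^(1/N)
    print(N, geo_mean)

# Toeplitz Distribution Theorem prediction for the limit:
print("sig_x2 * SFM_x =", sig_x2 * (1 - a**2))   # = sig_w2 = 1.0
```

As $N$ grows, the printed geometric mean converges toward the predicted limit of $1.0$.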
Recall that, for the optimal DPCM system,
$$\sigma_r^2\big|_{\text{DPCM}} = \gamma_e\, 2^{-2R}\, \sigma_{e|\infty}^2,$$
where we assumed that the signal applied to the DPCM quantizer is distributed similarly to the signal applied to the PCM quantizer (so that $\gamma_e = \gamma_x$), and where $\sigma_{e|\infty}^2$ denotes the prediction error variance resulting from use of the optimal infinite-length linear predictor:
$$\sigma_{e|\infty}^2 = \exp\Big\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \Big\} = \sigma_x^2\, SFM_x.$$
Making this latter assumption for the transform coder (implying $\gamma_y = \gamma_x$) and plugging in $\sigma_{e|\infty}^2 = \sigma_x^2\, SFM_x$ yields the following asymptotic result:
$$\lim_{N\to\infty} \sigma_r^2(N)\big|_{\text{TC}} = \gamma_x\, 2^{-2R}\, \sigma_x^2\, SFM_x = \sigma_r^2\big|_{\text{DPCM}}.$$
In other words, transform coding with infinite-dimensional optimal transformation and optimal bit allocation performs equivalently to DPCM with infinite-length optimal linear prediction.
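As a quick numerical sanity check on this equivalence (again a sketch, not from the original text), for the same hypothetical AR(1) source the optimal infinite-length predictor is simply $\hat{x}[n] = a\,x[n-1]$, so $\sigma_{e|\infty}^2$ must equal the innovation variance $\sigma_w^2$; the log-spectrum integral reproduces exactly that:

```python
import numpy as np

# Hypothetical AR(1) source, as before: sigma_e^2|inf should equal sig_w2.
a, sig_w2 = 0.9, 1.0
sig_x2 = sig_w2 / (1 - a**2)

w = np.linspace(-np.pi, np.pi, 2**16, endpoint=False)
S = sig_w2 / np.abs(1 - a * np.exp(-1j * w))**2    # power spectrum S_x(e^{jw})

sig_e2_inf = np.exp(np.mean(np.log(S)))  # exp{(1/2pi) int ln S dw}, via a Riemann sum
sfm = sig_e2_inf / np.mean(S)            # SFM_x, since mean(S) ~ sig_x2

print(sig_e2_inf)       # ~ 1.0 = sig_w2
print(sig_x2 * sfm)     # the same quantity written as sigma_x^2 * SFM_x
```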
Finite-dimensional analysis: comparison to DPCM
The fact that optimal transform coding performs as well as DPCM in the limiting case does not tell us the relative performance of these methods at practical levels of implementation, e.g., when the transform dimension and the predictor length are equal and finite. Below we compare the reconstruction error variances of TC and DPCM when the transform dimension $N$ equals the predictor length $N$.
Recalling that
$$\sigma_{e|N}^2 = \frac{\det \mathbf{R}_{N+1}}{\det \mathbf{R}_N} \quad\text{and}\quad \sigma_{e|0}^2 = \sigma_x^2 = \det \mathbf{R}_1,$$
where $\mathbf{R}_N$ denotes the $N \times N$ autocorrelation matrix of $x$, we find
$$\det \mathbf{R}_{N+1} = \sigma_{e|N}^2 \, \det \mathbf{R}_N.$$
Recursively applying the equations above, we find
$$\det \mathbf{R}_{N+1} = \prod_{k=0}^{N} \sigma_{e|k}^2,$$
which means that we can write
$$\det \mathbf{R}_N = \prod_{k=0}^{N-1} \sigma_{e|k}^2.$$
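The determinant identities above are easy to check numerically. In the sketch below (illustrative only; the helper name pred_err_var is invented here), the prediction error variances $\sigma_{e|k}^2$ are computed independently from the Yule-Walker normal equations, and their product is compared against $\det \mathbf{R}_N$:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical AR(1) autocorrelation, as in the earlier sketches.
a = 0.9
sig_x2 = 1.0 / (1 - a**2)
r = sig_x2 * a ** np.arange(8)

def pred_err_var(n):
    """sigma_e^2|n for the optimal length-n predictor, via the normal equations."""
    if n == 0:
        return r[0]                                   # no prediction: sigma_x^2
    c = np.linalg.solve(toeplitz(r[:n]), r[1:n + 1])  # optimal coefficients
    return r[0] - c @ r[1:n + 1]                      # resulting error variance

N = 5
err = [pred_err_var(k) for k in range(N)]
print(np.prod(err))                      # prod_{k<N} sigma_e^2|k
print(np.linalg.det(toeplitz(r[:N])))    # det R_N -- should match
```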
If in the previously derived TC reconstruction error variance expression we assume that $\gamma_y = \gamma_x$ and apply the eigenvalue property $\prod_{k=0}^{N-1} \lambda_k^{(N)} = \det \mathbf{R}_N$, the TC gain over PCM becomes
$$G_{TC|PCM}(N) = \frac{\sigma_x^2}{\big(\prod_{k=0}^{N-1} \lambda_k^{(N)}\big)^{1/N}} = \frac{\sigma_x^2}{\big(\prod_{k=0}^{N-1} \sigma_{e|k}^2\big)^{1/N}} = \Big( \prod_{k=0}^{N-1} \frac{\sigma_x^2}{\sigma_{e|k}^2} \Big)^{1/N} < \frac{\sigma_x^2}{\sigma_{e|N}^2} = G_{DPCM|PCM}(N).$$
The strict inequality follows from the fact that $\sigma_x^2 / \sigma_{e|k}^2$ is monotonically increasing with $k$.
To summarize, DPCM with optimal length-$N$ prediction performs better than TC with optimal $N \times N$ transformation and optimal bit allocation for any finite value of $N$. There is an intuitive explanation for this: the propagation of memory in the DPCM prediction loop makes the effective memory of DPCM greater than $N$, while in TC the effective memory is exactly $N$.
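To see the finite-$N$ gap numerically, here is one last sketch (an illustration, not from the original module) comparing $G_{TC|PCM}(N)$ and $G_{DPCM|PCM}(N)$ for a hypothetical source whose autocorrelation is a sum of two AR(1) terms, a valid correlation since power spectra add:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical source: r[m] = 0.9^|m| + (-0.6)^|m| (sum of two AR(1) correlations).
m = np.arange(16)
r = 0.9**m + (-0.6)**m
sig_x2 = r[0]

def pred_err_var(n):
    """sigma_e^2|n via the Yule-Walker normal equations (as in the previous sketch)."""
    if n == 0:
        return r[0]
    c = np.linalg.solve(toeplitz(r[:n]), r[1:n + 1])
    return r[0] - c @ r[1:n + 1]

for N in range(1, 9):
    err = np.array([pred_err_var(k) for k in range(N + 1)])
    G_tc = sig_x2 / np.exp(np.mean(np.log(err[:N])))   # sig_x2 / (prod_{k<N} err_k)^{1/N}
    G_dpcm = sig_x2 / err[N]                           # sig_x2 / sigma_e^2|N
    print(N, round(G_tc, 4), round(G_dpcm, 4))         # G_dpcm > G_tc at every N
```

Both gains approach $1/SFM_x$ as $N$ grows, but DPCM leads at every finite $N$, consistent with the inequality above.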
Source:
OpenStax, An Introduction to Source-Coding: Quantization, DPCM, Transform Coding, and Sub-band Coding. OpenStax CNX. Sep 25, 2009. Download for free at http://cnx.org/content/col11121/1.2