Performance of DPCM
Here we characterize the performance of DPCM via the simpler surrogate known as "quantized predictive encoding", which is known to have very similar performance in practice. To do this, we derive the optimum prediction coefficients, the resulting prediction error variance, and the gain over PCM.
- As we noted earlier, the DPCM performance gain is a consequence of the variance reduction obtained through prediction. Here we derive the optimal predictor coefficients, prediction error variance, and bit rate for the system in figure 4 from Differential Pulse Code Modulation. This system is easier to analyze than DPCM systems with the quantizer in the loop (e.g., figure 5 from Differential Pulse Code Modulation), and the difference in prediction-error behavior is said to be negligible when the quantization is fine, i.e., when the number of quantization levels is large (see page 267 of Jayant & Noll).
- Optimal Prediction Coefficients: First we find the coefficients $\mathbf{h} = (h_1, \dots, h_N)^T$ minimizing the prediction error variance
$$\sigma_e^2 = E\{e^2(n)\}, \qquad e(n) = x(n) - \sum_{i=1}^{N} h_i\, x(n-i).$$
Throughout, we assume that $x(n)$ is a zero-mean stationary random process with autocorrelation
$$r_x(k) = E\{x(n)\, x(n-k)\}.$$
A necessary condition for optimality is the following:
$$\frac{\partial}{\partial h_j}\, E\{e^2(n)\} = -2\, E\{e(n)\, x(n-j)\} = 0, \quad j = 1, \dots, N,$$
where we have used equation 1 from Differential Pulse Code Modulation. We can rewrite this as a system of linear equations (the normal equations):
$$\sum_{i=1}^{N} h_i\, r_x(j-i) = r_x(j), \quad j = 1, \dots, N, \qquad \text{i.e.,} \quad \mathbf{R}\mathbf{h} = \mathbf{r},$$
where $[\mathbf{R}]_{j,i} = r_x(j-i)$ and $\mathbf{r} = (r_x(1), \dots, r_x(N))^T$, which yields an expression for the optimal prediction coefficients:
$$\mathbf{h} = \mathbf{R}^{-1}\mathbf{r}.$$
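As a concrete illustration, the following minimal Python sketch (our own; the function name and test autocorrelation are not from the source) solves the normal equations for the length-$N$ optimal predictor:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def optimal_predictor(r, N):
    """Solve the normal equations R h = r for the length-N MSE-optimal
    forward predictor, given autocorrelations r = [r_x(0), ..., r_x(N)]."""
    # R is symmetric Toeplitz with first column (r_x(0), ..., r_x(N-1));
    # the right-hand side is (r_x(1), ..., r_x(N)).
    return solve_toeplitz(r[:N], r[1:N+1])

# Example: first-order Gauss-Markov source with r_x(k) = rho^|k|
rho = 0.9
r = rho ** np.arange(4)
print(optimal_predictor(r, 2))   # ~ [0.9, 0.0]: one tap suffices here
```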
- Error for Length-N Predictor: The definition of $\sigma_e^2$ and the normal equations above can be used to show that the minimum prediction error variance is
$$\sigma_{e,N}^2 = r_x(0) - \mathbf{r}^T \mathbf{R}^{-1} \mathbf{r} = r_x(0) - \mathbf{r}^T \mathbf{h}.$$
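Continuing the sketch above (again, our own naming), the minimum error variance follows directly from the solved coefficients:

```python
def prediction_error_variance(r, N):
    """Minimum prediction error variance sigma_e^2 = r_x(0) - r^T h
    for the length-N MSE-optimal predictor."""
    h = optimal_predictor(r, N)
    return r[0] - np.dot(r[1:N+1], h)

print(prediction_error_variance(r, 2))   # ~ 0.19 = 1 - rho^2 for the example above
```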
- Error for Infinite-Length Predictor: We now characterize $\sigma_{e,\infty}^2$, the prediction error variance as $N \to \infty$. Note that the normal equations and the error-variance expression can be combined into the augmented system
$$\begin{pmatrix} r_x(0) & r_x(1) & \cdots & r_x(N) \\ r_x(1) & r_x(0) & \cdots & r_x(N-1) \\ \vdots & & \ddots & \vdots \\ r_x(N) & r_x(N-1) & \cdots & r_x(0) \end{pmatrix} \begin{pmatrix} 1 \\ -h_1 \\ \vdots \\ -h_N \end{pmatrix} = \begin{pmatrix} \sigma_{e,N}^2 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
Using Cramer's rule,
$$\sigma_{e,N}^2 = \frac{\det \mathbf{R}_{N+1}}{\det \mathbf{R}_N},$$
where $\mathbf{R}_N$ denotes the $N \times N$ Toeplitz autocorrelation matrix of $x(n)$.
Cramer's rule: Given the matrix equation $\mathbf{A}\mathbf{x} = \mathbf{b}$ with invertible $\mathbf{A}$, the $k$-th element of the solution $\mathbf{x}$ is
$$x_k = \frac{\det \mathbf{A}_k}{\det \mathbf{A}},$$
where $\mathbf{A}_k$ equals $\mathbf{A}$ with its $k$-th column replaced by $\mathbf{b}$, and $\det(\cdot)$ denotes the determinant.
A result from the theory of Toeplitz determinants (see Jayant & Noll) gives the final answer:
$$\sigma_{e,\infty}^2 = \lim_{N \to \infty} \frac{\det \mathbf{R}_{N+1}}{\det \mathbf{R}_N} = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \right\},$$
where $S_x(e^{j\omega})$ is the power spectral density of the WSS random process $x(n)$:
$$S_x(e^{j\omega}) = \sum_{k=-\infty}^{\infty} r_x(k)\, e^{-j\omega k}.$$
(Note that, because $r_x(k)$ is conjugate symmetric for stationary $x(n)$, $S_x(e^{j\omega})$ will always be real-valued and non-negative.)
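To make the limit concrete, here is a small numerical check (our own sketch, not from the source) that evaluates this integral for the Gauss-Markov example, whose PSD is known in closed form:

```python
import numpy as np

rho = 0.9
sigma_v2 = 1 - rho**2                                   # driving-noise variance
w = np.linspace(-np.pi, np.pi, 200001)
S_x = sigma_v2 / np.abs(1 - rho * np.exp(-1j * w))**2   # closed-form AR(1) PSD

# The average of ln S_x over [-pi, pi] approximates (1/2pi) * the integral
print(np.exp(np.log(S_x).mean()))                       # ~ 0.19 = sigma_v^2
```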
- ARMA Source Model: If the random process $x(n)$ can be modelled as a general linear process, i.e., white noise $v(n)$ with variance $\sigma_v^2$ driving a causal, minimum-phase LTI system $B(z)$:
$$x(n) = \sum_{k=0}^{\infty} b_k\, v(n-k),$$
then it can be shown that
$$\sigma_{e,\infty}^2 = \sigma_v^2\, |b_0|^2.$$
Thus the MSE-optimal prediction error variance equals that of the driving noise $v(n)$ when $b_0 = 1$.
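As a sanity check (our own sketch; the example filter is arbitrary but minimum-phase with $b_0 = 1$), plugging $S_x(e^{j\omega}) = \sigma_v^2 |B(e^{j\omega})|^2$ into the integral above returns $\sigma_v^2$:

```python
import numpy as np

b = [1.0, 0.5, 0.25]        # causal, minimum-phase FIR B(z) with b_0 = 1
sigma_v2 = 2.0              # driving white-noise variance

w = np.linspace(-np.pi, np.pi, 200001)
B = sum(bk * np.exp(-1j * w * k) for k, bk in enumerate(b))
S_x = sigma_v2 * np.abs(B)**2

print(np.exp(np.log(S_x).mean()))   # ~ 2.0 = sigma_v^2, as predicted
```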
- Prediction Error Whiteness: We can also demonstrate that the MSE-optimal prediction error is white when $N = \infty$. This is a simple consequence of the orthogonality principle seen earlier:
$$E\{e(n)\, x(n-j)\} = 0, \quad j \geq 1.$$
The prediction error has autocorrelation
$$r_e(k) = E\{e(n)\, e(n-k)\} = E\Big\{ e(n) \Big( x(n-k) - \sum_{i=1}^{\infty} h_i\, x(n-k-i) \Big) \Big\} = 0, \quad k \geq 1,$$
so that $r_e(k) = \sigma_{e,\infty}^2\, \delta(k)$.
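An empirical check (again, our own sketch): for the Gauss-Markov example, the one-tap optimal prediction error should have an essentially delta-shaped sample autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.9, 200_000
v = rng.standard_normal(n) * np.sqrt(1 - rho**2)   # white driving noise
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + v[t]                   # AR(1) source

e = x[1:] - rho * x[:-1]                           # optimal prediction error
r_e = [np.mean(e[k:] * e[:len(e) - k]) for k in range(4)]
print(np.round(r_e, 3))   # ~ [0.19, 0, 0, 0]: white, variance 1 - rho^2
```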
- AR Source Model: When the input can be modelled as an autoregressive (AR) process of order N:
$$x(n) = \sum_{i=1}^{N} a_i\, x(n-i) + v(n),$$
with white noise $v(n)$, then the MSE-optimal results (i.e., $\sigma_{e,N}^2 = \sigma_v^2$ and whitening) may be obtained with a forward predictor of order N. Specifically, the prediction coefficients $h_i$ can be chosen as
$$h_i = a_i, \quad i = 1, \dots, N,$$
and so the prediction error becomes
$$e(n) = x(n) - \sum_{i=1}^{N} a_i\, x(n-i) = v(n).$$
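To see this numerically (our own sketch, reusing `optimal_predictor` in spirit), solving the normal equations with sample autocorrelations of a simulated AR(2) source recovers the AR coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
a, n = [1.2, -0.5], 200_000                        # stable AR(2) coefficients
x = np.zeros(n)
for t in range(2, n):
    x[t] = a[0] * x[t - 1] + a[1] * x[t - 2] + rng.standard_normal()

# Sample autocorrelations r_x(0), r_x(1), r_x(2), then solve R h = r
r = np.array([np.mean(x[k:] * x[:n - k]) for k in range(3)])
h = np.linalg.solve([[r[0], r[1]], [r[1], r[0]]], r[1:])
print(np.round(h, 2))                              # ~ [1.2, -0.5] = (a_1, a_2)
```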
- Efficiency Gain over PCM: Prediction reduces the variance at the quantizer input without changing the variance of the reconstructed signal. This can be exploited in two ways:
- By keeping the number of quantization levels fixed, we could reduce the quantization step width and obtain a lower quantization error than PCM at the same bit rate.
- By keeping the decision levels fixed, we could reduce the number of quantization levels and obtain a lower bit rate than PCM at the same quantization error level.
Assuming that $x(n)$ and $e(n)$ are distributed similarly, use of the same style of quantizer in the DPCM and PCM systems yields
$$G_{\mathrm{DPCM}} = \frac{\mathrm{SNR}_{\mathrm{DPCM}}}{\mathrm{SNR}_{\mathrm{PCM}}} = \frac{\sigma_x^2}{\sigma_e^2}.$$
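For instance (our own arithmetic, using the Gauss-Markov example from above), the resulting gain is easy to evaluate:

```python
import numpy as np

sigma_x2 = 1.0                  # source variance
sigma_e2 = 1 - 0.9**2           # prediction error variance for rho = 0.9
gain_db = 10 * np.log10(sigma_x2 / sigma_e2)
print(f"{gain_db:.1f} dB")      # ~ 7.2 dB SNR improvement over PCM
```

At roughly 6 dB per bit, this corresponds to saving about 1.2 bits per sample at the same quantization error level.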
Source: OpenStax, "An Introduction to Source-Coding: Quantization, DPCM, Transform Coding, and Sub-band Coding." OpenStax CNX, Sep 25, 2009. Download for free at http://cnx.org/content/col11121/1.2