Here we characterize the performance of DPCM via the simpler surrogate known as "quantized predictive encoding," which exhibits very similar performance in practice. To do this, we derive the optimal prediction coefficients, the resulting prediction error variance, and the gain over PCM.
  • As we noted earlier, the DPCM performance gain is a consequence of the variance reduction obtained through prediction. Here we derive the optimal predictor coefficients, the prediction error variance, and the bit rate for the system in figure 4 of Differential Pulse Code Modulation. This system is easier to analyze than a DPCM system with the quantizer in the loop (e.g., figure 5 of Differential Pulse Code Modulation), and the difference in prediction-error behavior is reported to be negligible when $R > 2$ (see page 267 of Jayant & Noll).
  • Optimal Prediction Coefficients: First we find the coefficients $\mathbf{h} = (h_1, \dots, h_N)^t$ that minimize the prediction error variance:
    $$ \min_{\mathbf{h}}\; E\{ e^2(n) \}. $$
    Throughout, we assume that $x(n)$ is a zero-mean stationary random process with autocorrelation
    $$ r_x(k) := E\{ x(n)\, x(n-k) \} = r_x(-k). $$
    A necessary condition for optimality is the following:
    $$
    \begin{aligned}
    \forall\, j \in \{1,\dots,N\}: \quad
    0 &= -\tfrac{1}{2}\,\frac{\partial}{\partial h_j}\, E\{ e^2(n) \}
       = -E\left\{ e(n)\,\frac{\partial e(n)}{\partial h_j} \right\}
       = E\{ e(n)\, x(n-j) \} \qquad \text{(the "orthogonality principle")} \\
      &= E\left\{ \left( x(n) - \sum_{i=1}^{N} h_i\, x(n-i) \right) x(n-j) \right\}
       = E\{ x(n)\, x(n-j) \} - \sum_{i=1}^{N} h_i\, E\{ x(n-i)\, x(n-j) \}
       = r_x(j) - \sum_{i=1}^{N} h_i\, r_x(j-i),
    \end{aligned}
    $$
    where we have used equation 1 from Differential Pulse Code Modulation. We can rewrite this as a system of linear equations:
    $$
    \underbrace{\begin{pmatrix} r_x(1) \\ r_x(2) \\ \vdots \\ r_x(N) \end{pmatrix}}_{\mathbf{r}_x}
    =
    \underbrace{\begin{pmatrix}
    r_x(0) & r_x(1) & \cdots & r_x(N-1) \\
    r_x(1) & r_x(0) & \cdots & r_x(N-2) \\
    \vdots & \vdots & \ddots & \vdots \\
    r_x(N-1) & r_x(N-2) & \cdots & r_x(0)
    \end{pmatrix}}_{\mathbf{R}_N}
    \underbrace{\begin{pmatrix} h_1 \\ h_2 \\ \vdots \\ h_N \end{pmatrix}}_{\mathbf{h}}
    $$
    which yields an expression for the optimal prediction coefficients:
    $$ \mathbf{h} = \mathbf{R}_N^{-1}\, \mathbf{r}_x. $$
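    To make this concrete, here is a brief numerical sketch (a hypothetical AR(2) source whose coefficients and sample size are chosen only for illustration): sample autocorrelation estimates are used to build $\mathbf{R}_N$ and $\mathbf{r}_x$, and the normal equations are solved for $\mathbf{h}$.

```python
# Hypothetical example: estimate r_x(k) from data generated by an assumed
# AR(2) source, then solve the normal equations  R_N h = r_x  for the
# MSE-optimal predictor coefficients.
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(0)
a = [1.0, -0.9, 0.5]                 # assumed A(z): x(n) = 0.9 x(n-1) - 0.5 x(n-2) + v(n)
v = rng.standard_normal(200_000)     # unit-variance white driving noise
x = lfilter([1.0], a, v)             # synthesize the source

def r_hat(x, k):
    """Biased sample estimate of r_x(k) = E{x(n) x(n-k)}."""
    return np.dot(x[k:], x[:len(x) - k]) / len(x)

N = 2                                # predictor order
r = np.array([r_hat(x, k) for k in range(N + 1)])
R_N = toeplitz(r[:N])                # N x N autocorrelation matrix
r_x = r[1:]                          # (r_x(1), ..., r_x(N))

h = np.linalg.solve(R_N, r_x)        # h = R_N^{-1} r_x
print(h)                             # ~ [0.9, -0.5] for this source
```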
  • Error for Length-$N$ Predictor: The definition $\mathbf{x}(n) := \big( x(n),\, x(n-1),\, \dots,\, x(n-N) \big)^t$ and the optimal-coefficient expression above can be used to show that the minimum prediction error variance is
    $$
    \begin{aligned}
    \sigma_e^2\big|_{\min,N} = E\{ e^2(n) \}
    &= E\left\{ \left( \mathbf{x}^t(n) \begin{pmatrix} 1 \\ -\mathbf{h} \end{pmatrix} \right)^{\!2} \right\}
     = \begin{pmatrix} 1 \\ -\mathbf{h} \end{pmatrix}^{\!t} E\{ \mathbf{x}(n)\, \mathbf{x}^t(n) \} \begin{pmatrix} 1 \\ -\mathbf{h} \end{pmatrix} \\
    &= \begin{pmatrix} 1 \\ -\mathbf{h} \end{pmatrix}^{\!t}
       \begin{pmatrix} r_x(0) & \mathbf{r}_x^t \\ \mathbf{r}_x & \mathbf{R}_N \end{pmatrix}
       \begin{pmatrix} 1 \\ -\mathbf{h} \end{pmatrix}
     = r_x(0) - 2\, \mathbf{h}^t \mathbf{r}_x + \mathbf{h}^t \mathbf{R}_N \mathbf{h}
     = r_x(0) - \mathbf{r}_x^t \mathbf{R}_N^{-1} \mathbf{r}_x.
    \end{aligned}
    $$
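    Continuing the same sketch (reusing $x$, $r$, $\mathbf{R}_N$, $\mathbf{r}_x$, and $\mathbf{h}$ from the snippet above), the closed-form error variance can be checked against a direct estimate of $E\{e^2(n)\}$:

```python
# sigma_e^2|min,N = r_x(0) - r_x^t R_N^{-1} r_x, versus a direct estimate
# of E{e^2(n)} using the h computed above (variables reused from that sketch).
sigma_e2_min = r[0] - r_x @ np.linalg.solve(R_N, r_x)

e = x[N:] - h[0] * x[N-1:-1] - h[1] * x[N-2:-2]   # e(n) = x(n) - sum_i h_i x(n-i)
print(sigma_e2_min, np.mean(e ** 2))              # both ~ 1.0 = sigma_v^2 here
```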
  • Error for Infinite-Length Predictor: We now characterize $\sigma_e^2\big|_{\min,N}$ as $N \to \infty$. Note that
    $$
    \underbrace{\begin{pmatrix} r_x(0) & \mathbf{r}_x^t \\ \mathbf{r}_x & \mathbf{R}_N \end{pmatrix}}_{\mathbf{R}_{N+1}}
    \begin{pmatrix} 1 \\ -\mathbf{h} \end{pmatrix}
    =
    \begin{pmatrix} \sigma_e^2\big|_{\min,N} \\ \mathbf{0} \end{pmatrix}.
    $$
    Using Cramer's rule,
    $$
    1 = \frac{\begin{vmatrix} \sigma_e^2\big|_{\min,N} & \mathbf{r}_x^t \\ \mathbf{0} & \mathbf{R}_N \end{vmatrix}}{\big| \mathbf{R}_{N+1} \big|}
      = \frac{\sigma_e^2\big|_{\min,N}\, \big| \mathbf{R}_N \big|}{\big| \mathbf{R}_{N+1} \big|}
    \quad\Longrightarrow\quad
    \sigma_e^2\big|_{\min,N} = \frac{\big| \mathbf{R}_{N+1} \big|}{\big| \mathbf{R}_N \big|}.
    $$

    Cramer's rule

    Given the matrix equation $\mathbf{A}\mathbf{y} = \mathbf{b}$, where $\mathbf{A} = (\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_N) \in \mathbb{R}^{N \times N}$,
    $$
    y_k = \frac{\big|\, \mathbf{a}_1, \dots, \mathbf{a}_{k-1}, \mathbf{b}, \mathbf{a}_{k+1}, \dots, \mathbf{a}_N \,\big|}{|\mathbf{A}|},
    $$
    where $|\cdot|$ denotes the determinant.

    A result from the theory of Toeplitz determinants (see Jayant & Noll) gives the final answer:
    $$
    \sigma_e^2\big|_{\min} = \lim_{N \to \infty} \frac{\big| \mathbf{R}_{N+1} \big|}{\big| \mathbf{R}_N \big|}
    = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \right\}
    $$
    where $S_x(e^{j\omega})$ is the power spectral density of the WSS random process $x(n)$:
    $$
    S_x(e^{j\omega}) := \sum_{n=-\infty}^{\infty} r_x(n)\, e^{-j\omega n}.
    $$
    (Note that, because $r_x(n)$ is conjugate symmetric for stationary $x(n)$, $S_x(e^{j\omega})$ is always real-valued; being a power spectral density, it is also non-negative.)
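    As a rough numerical check (again using the hypothetical AR(2) source and the `r_hat` helper from the first sketch), the log-spectrum integral and the Toeplitz determinant ratio can be compared directly:

```python
# The Toeplitz-determinant limit versus the log-spectrum integral, for the
# assumed AR(2) source above (reusing x and r_hat from the first sketch).
from scipy.signal import freqz

# S_x(e^{jw}) = sigma_v^2 / |A(e^{jw})|^2 for the AR(2) model, with sigma_v^2 = 1
w, A_f = freqz([1.0], [1.0, -0.9, 0.5], worN=8192, whole=True)
S_x = 1.0 / np.abs(A_f) ** 2
print(np.exp(np.mean(np.log(S_x))))     # (1/2π) ∫ ln S_x dω  exponentiated: ≈ 1.0

# |R_{N+1}| / |R_N| from sample autocorrelations, for a moderate N
N = 6
r = np.array([r_hat(x, k) for k in range(N + 1)])
print(np.linalg.det(toeplitz(r)) / np.linalg.det(toeplitz(r[:N])))   # also ≈ 1.0
```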
  • ARMA Source Model: If the random process $x(n)$ can be modelled as a general linear process, i.e., white noise $v(n)$ driving a causal LTI system $B(z)$:
    $$
    x(n) = v(n) + \sum_{k=1}^{\infty} b_k\, v(n-k) \quad\text{with}\quad \sum_{k} |b_k|^2 < \infty,
    $$
    then it can be shown that
    $$
    \sigma_v^2 = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \right\}.
    $$
    Thus, the MSE-optimal prediction error variance equals that of the driving noise $v(n)$ when $N = \infty$.
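    A small self-contained sketch for an assumed minimum-phase MA(1) process, $x(n) = v(n) + 0.6\, v(n-1)$ with $\sigma_v^2 = 2$, shows the log-spectrum integral recovering the driving-noise variance:

```python
# Assumed general linear process x(n) = v(n) + 0.6 v(n-1) with sigma_v^2 = 2:
# since 1 + 0.6 z^{-1} is minimum phase, the log-spectrum integral returns
# sigma_v^2 (Jensen's formula makes the |B|^2 term integrate to zero).
import numpy as np

sigma_v2, b1 = 2.0, 0.6
w = np.linspace(-np.pi, np.pi, 1 << 14, endpoint=False)
S_x = sigma_v2 * np.abs(1.0 + b1 * np.exp(-1j * w)) ** 2
print(np.exp(np.mean(np.log(S_x))))   # ~ 2.0 = sigma_v^2
```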
  • Prediction Error Whiteness: We can also show that the MSE-optimal prediction error is white when $N = \infty$. This follows directly from the orthogonality principle seen earlier:
    $$
    0 = E\{ e(n)\, x(n-k) \}, \qquad k = 1, 2, \dots
    $$
    The prediction error has autocorrelation
    $$
    \begin{aligned}
    E\{ e(n)\, e(n-k) \}
    &= E\left\{ e(n) \left( x(n-k) - \sum_{i=1}^{\infty} h_i\, x(n-k-i) \right) \right\} \\
    &= \underbrace{E\{ e(n)\, x(n-k) \}}_{0 \text{ for } k > 0}
       \;-\; \sum_{i=1}^{\infty} h_i\, \underbrace{E\{ e(n)\, x(n-k-i) \}}_{0}
     = \sigma_e^2\big|_{\min}\, \delta(k).
    \end{aligned}
    $$
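    Reusing the prediction error $e(n)$ computed in the earlier AR(2) sketch, its sample autocorrelation is indeed close to $\sigma_e^2\,\delta(k)$:

```python
# Sample autocorrelation of the prediction error e(n) from the earlier sketch:
# approximately sigma_e^2 at lag 0 and roughly zero at all other lags (white).
r_e = [np.dot(e[k:], e[:len(e) - k]) / len(e) for k in range(6)]
print(np.round(r_e, 3))   # ~ [1.0, 0, 0, 0, 0, 0]
```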
  • AR Source Model: When the input can be modelled as an autoregressive (AR) process of order $N$:
    $$
    X(z) = \frac{1}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}}\, V(z),
    $$
    then MSE-optimal results (i.e., $\sigma_e^2 = \sigma_e^2|_{\min}$ and a white prediction error) can be obtained with a forward predictor of order $N$. Specifically, choosing the prediction coefficients $h_i = -a_i$ gives $1 - H(z) = 1 + a_1 z^{-1} + \cdots + a_N z^{-N}$, so the prediction error $E(z)$ becomes
    $$
    E(z) = \big( 1 - H(z) \big)\, X(z)
         = \frac{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}}\, V(z)
         = V(z).
    $$
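    For the hypothetical AR(2) source from the earlier sketches, choosing $h_i = -a_i$ (equivalently, filtering $x(n)$ with $A(z)$) reproduces the driving noise sample for sample:

```python
# For the assumed AR(2) source, filtering x(n) with A(z) = 1 - 0.9 z^{-1} + 0.5 z^{-2}
# (i.e., predicting with h_i = -a_i) returns the driving noise v(n) exactly
# (x and v are reused from the first sketch; lfilter was imported there).
e_ar = lfilter([1.0, -0.9, 0.5], [1.0], x)   # E(z) = A(z) X(z)
print(np.allclose(e_ar, v))                  # True
```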
  • Efficiency Gain over PCM: Prediction reduces the variance at the quantizer input without changing the variance of the reconstructed signal.
    • By keeping the number of quantization levels fixed, we could reduce the quantization step width and obtain a lower quantization error than PCM at the same bit rate.
    • By keeping the decision levels fixed, we could reduce the number of quantization levels and obtain a lower bit rate than PCM at the same quantization error level.
    Assuming that $x(n)$ and $e(n)$ are similarly distributed, use of the same style of quantizer in DPCM and PCM systems yields
    $$
    \mathrm{SNR}_{\mathrm{DPCM}} = \mathrm{SNR}_{\mathrm{PCM}} + 10 \log_{10} \frac{\sigma_x^2}{\sigma_e^2}.
    $$
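    For the same hypothetical AR(2) source, the prediction gain $10 \log_{10}(\sigma_x^2 / \sigma_e^2)$ can be evaluated directly from the earlier sketch's $x$ and $e$:

```python
# Prediction gain of DPCM over PCM for the assumed AR(2) source,
# 10 log10(sigma_x^2 / sigma_e^2), using x and e from the earlier sketches.
gain_db = 10 * np.log10(np.mean(x ** 2) / np.mean(e ** 2))
print(f"SNR_DPCM - SNR_PCM ~ {gain_db:.1f} dB")   # ~ 3.2 dB for this source
```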

Source: OpenStax, An Introduction to Source-Coding: Quantization, DPCM, Transform Coding, and Sub-band Coding. OpenStax CNX, Sep 25, 2009. Download for free at http://cnx.org/content/col11121/1.2