Quantization is a highly nonlinear process and is very difficult to analyze precisely. Approximations and assumptions are made to make analysis tractable.
The roundoff or truncation errors at any point in a system at each time are random, stationary, and statistically independent (white, and independent of all other quantizers in the system).
That is, the error autocorrelation function is $r_e(k) = E\left[e(n)e(n+k)\right] = \sigma_q^2 \delta(k)$. Intuitively, and confirmed experimentally in some (but not all!) cases, one expects the quantization error to have a uniform distribution over the interval $\left[-\frac{\Delta_B}{2}, \frac{\Delta_B}{2}\right]$ for rounding, or $\left(-\Delta_B, 0\right]$ for truncation, where $\Delta_B = 2^{-B}$ is the quantization step size.
In this case, rounding has zero mean and variance $\sigma_q^2 = E\left[e^2(n)\right] = \frac{\Delta_B^2}{12}$, and truncation has the statistics $E\left[e(n)\right] = -\frac{\Delta_B}{2}$, $\sigma_q^2 = \frac{\Delta_B^2}{12}$.
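As a quick check of these formulas, the sketch below estimates the error mean and variance for rounding and truncation quantizers and compares them with the predictions above. It assumes a $B$-bit fractional quantizer with step size $\Delta_B = 2^{-B}$ and a test signal that exercises many quantization levels; the bit width and signal are illustrative choices, not taken from the text.

```python
import numpy as np

# Sketch: empirically check rounding/truncation error statistics against the
# predicted mean and variance (assumed B-bit quantizer, step delta = 2**-B).
B = 8
delta = 2.0 ** -B

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100_000)           # test signal spanning many levels

e_round = np.round(x / delta) * delta - x      # rounding error, in [-delta/2, delta/2]
e_trunc = np.floor(x / delta) * delta - x      # truncation error, in (-delta, 0]

print("rounding:   mean %.2e (predict 0),        var %.3e (predict %.3e)"
      % (e_round.mean(), e_round.var(), delta**2 / 12))
print("truncation: mean %.2e (predict %.2e), var %.3e (predict %.3e)"
      % (e_trunc.mean(), -delta / 2, e_trunc.var(), delta**2 / 12))
```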
Please note that the independence assumption may be very bad (for example, when quantizing a sinusoid with an integer period). There is another quantizing scheme called dithering, in which the values are randomly assigned to nearby quantization levels. This can be (and often is) implemented by adding a small (one- or two-bit) random input to the signal before a truncation or rounding quantizer. This is used extensively in practice. Although the overall error is somewhat higher, it is spread evenly over all frequencies, rather than being concentrated in spectral lines. This is very important when quantizing sinusoidal or other periodic signals, for example.
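As an illustration of dithering (not an implementation from the text), the sketch below adds a small triangular-PDF dither, formed as the sum of two uniform random values of about one quantization step, before a rounding quantizer applied to a sinusoid with an integer period. The bit width, period, and dither amplitude are assumed for the example.

```python
import numpy as np

# Sketch: dithering spreads the quantization error over frequency instead of
# concentrating it in spectral lines.  All parameters here are illustrative.
B = 8
delta = 2.0 ** -B
N = 4096
n = np.arange(N)
x = 0.9 * np.sin(2 * np.pi * n / 32)            # sinusoid with an integer period

rng = np.random.default_rng(1)
dither = (rng.uniform(-0.5, 0.5, N) + rng.uniform(-0.5, 0.5, N)) * delta  # triangular PDF

def quantize(v):
    """Round to the nearest multiple of delta."""
    return np.round(v / delta) * delta

e_plain  = quantize(x) - x                      # error concentrated at harmonics of x
e_dither = quantize(x + dither) - x             # slightly larger, but spectrally flat

print("error variance without dither: %.3e" % e_plain.var())
print("error variance with dither:    %.3e" % e_dither.var())
# Compare np.abs(np.fft.rfft(e_plain)) with np.abs(np.fft.rfft(e_dither)) to see
# the spectral lines disappear in the dithered case.
```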
Pretend that the quantization error is really additive Gaussian noise with the same mean and variance as the uniform quantizer. That is, model the quantizer output as $Q\left(x(n)\right) = x(n) + e(n)$, where $e(n)$ is white Gaussian noise with the mean and variance given above. This model is a linear system, which our standard theory can handle easily. We model the noise as Gaussian because it remains Gaussian after passing through filters, so analysis in a system context is tractable.
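Under this additive-noise model, the noise at a filter output follows from ordinary linear-system analysis: its variance is $\sigma_q^2 \sum_n h^2(n)$. The sketch below checks this numerically; the first-order filter and its coefficient are illustrative assumptions, not a structure from the text.

```python
import numpy as np
from scipy.signal import lfilter

# Sketch: replace the quantizer by an additive white Gaussian noise source with
# variance delta**2/12 and analyze the noise at the output of a (hypothetical)
# first-order filter y[n] = a*y[n-1] + x[n].
B = 8
delta = 2.0 ** -B
sigma2 = delta**2 / 12                          # variance of the equivalent noise source

a = 0.9                                         # pole of the illustrative filter
rng = np.random.default_rng(2)
e = rng.normal(0.0, np.sqrt(sigma2), 200_000)   # model: e[n] ~ N(0, sigma_q^2)

y = lfilter([1.0], [1.0, -a], e)                # noise after the filter

# Theory: output variance = sigma_q^2 * sum h[n]^2 = sigma_q^2 / (1 - a^2)
print("measured output noise variance:  %.3e" % y.var())
print("predicted output noise variance: %.3e" % (sigma2 / (1 - a**2)))
```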