Claim 2 The probability of the type class $T_q$ under the i.i.d. prior $p$ obeys
$$\frac{1}{(n+1)^{|\mathcal{X}|}}\, 2^{-n D(q\|p)} \;\leq\; p^n(T_q) \;\leq\; 2^{-n D(q\|p)},$$
where $q$ is the empirical distribution (type) shared by the sequences in $T_q$ and $D(q\|p) = \sum_{a \in \mathcal{X}} q(a) \log_2 \frac{q(a)}{p(a)}$ is the Kullback-Leibler divergence.
Consider now an event $A$ that is a union over type classes. Suppose that the prior $p$ does not belong to $A$; then $A$ is rare with respect to (w.r.t.) the prior $p$, and we have $p^n(A) \rightarrow 0$ as $n \rightarrow \infty$. That is, the probability is concentrated around $p$. In general, the probability assigned by the prior to an event $A$ satisfies
$$p^n(A) \doteq 2^{-n \min_{q \in A} D(q\|p)},$$
where we denote $a_n \doteq b_n$ when $\lim_{n \rightarrow \infty} \frac{1}{n} \log_2 \frac{a_n}{b_n} = 0$.
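To make the exponent concrete, here is a minimal numerical sketch in Python, assuming a binary alphabet with an illustrative prior $p$ and type $q$; it compares the exact probability of a type class with the exponential approximation $2^{-n D(q\|p)}$.

```python
# Numerical check that a type class has probability close to 2^{-n D(q||p)}.
# Binary alphabet; the prior p and the type q = k/n are illustrative choices.
from math import comb, log2

def type_class_prob(n, k, p):
    """Exact probability, under an i.i.d. Bernoulli(p) prior, of the type class
    of length-n binary sequences containing exactly k ones."""
    return comb(n, k) * (p ** k) * ((1 - p) ** (n - k))

def kl_divergence(q, p):
    """Binary Kullback-Leibler divergence D(q||p) in bits."""
    return sum(qa * log2(qa / pa)
               for qa, pa in ((q, p), (1 - q, 1 - p)) if qa > 0)

n, k, p = 100, 30, 0.1                     # type q = 0.3 versus prior p = 0.1
q = k / n
exact = type_class_prob(n, k, p)
approx = 2 ** (-n * kl_divergence(q, p))
# The normalized exponents differ only by a polynomially small correction in n.
print(log2(exact) / n, log2(approx) / n)
```

Running it shows the two exponents agreeing up to a term of order $\frac{\log_2 n}{n}$, which matches the polynomial factor in the claim.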
Fixed to fixed length source coding: As before, we have a sequence $x$ of length $n$, and each element of $x$ is from the alphabet $\mathcal{X}$. A source code maps the input $x \in \mathcal{X}^n$ to a set of bit vectors, each of length $Rn$. The rate $R$ quantifies the number of output bits of the code per input element of $x$. We assume without loss of generality that $Rn \in \mathbb{N}$. If not, then we can round $Rn$ up to $\lceil Rn \rceil$, where $\lceil \cdot \rceil$ denotes rounding up. That is, the output of the code consists of $\lceil Rn \rceil$ bits. If $R$ and $n$ are fixed, then we call this a fixed to fixed length source code.
The decoder processes the $Rn$ bits and yields $\hat{x} \in \mathcal{X}^n$. Ideally we have that $\hat{x} = x$, but if $2^{Rn} < |\mathcal{X}|^n$ then there are inputs that are not mapped to any output, and $\hat{x}$ may differ from $x$. Therefore, we want the error probability $\Pr(\hat{x} \neq x)$ to be small. If $R$ is too small, then the error probability will go to 1. On the other hand, a sufficiently large $R$ will drive this error probability to 0 as $n$ is increased.
If $R < \log_2 |\mathcal{X}|$ and $\Pr(\hat{x} \neq x)$ is vanishing as $n$ is increased, then we are compressing, because $2^{Rn} < |\mathcal{X}|^n$, where $|\mathcal{X}|^n$ is the number of possible inputs and there are only $2^{Rn}$ possible outputs.
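As a small illustration of this bookkeeping, the following sketch (the alphabet size, block length, and rate are arbitrary assumed values) counts inputs and outputs for a fixed to fixed length code.

```python
# Bookkeeping for a fixed-to-fixed length source code: the encoder emits
# ceil(R*n) bits, so there are 2^ceil(R*n) codewords for |X|^n possible inputs.
from math import ceil

alphabet_size = 4        # |X| (illustrative)
n = 1000                 # block length (illustrative)
R = 1.5                  # rate in output bits per input symbol; here R < log2(4) = 2

output_bits = ceil(R * n)                # the code emits ceil(R n) bits
num_inputs = alphabet_size ** n
num_outputs = 2 ** output_bits

# Compression means strictly fewer codewords than inputs, so some inputs cannot
# be represented exactly and the code can at best be near-lossless.
print(output_bits)                       # 1500
print(num_outputs < num_inputs)          # True, because R < log2(alphabet_size)
```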
What is a good fixed to fixed length source code? One option is to map $2^{Rn} - 1$ outputs to the $2^{Rn} - 1$ inputs with the highest probabilities, and the last output can be mapped to a "don't care" input. We will discuss the performance of this style of code.
An input $x$ is called $\epsilon$-typical if $2^{-n(H + \epsilon)} \leq p(x) \leq 2^{-n(H - \epsilon)}$. We denote the set of $\epsilon$-typical inputs of length $n$ by $T_\epsilon$; this set includes the type classes whose empirical probabilities are equal (or closest) to the true prior $p$. Note that for each type class $T_q$, all inputs in the type class have the same probability, i.e., $p(x) = 2^{-n(H(q) + D(q\|p))}$ for every $x \in T_q$. Therefore, the set $T_\epsilon$ is a union of type classes, and can be thought of as an event ( [link] ) that contains the type classes consisting of high-probability sequences. It is easily seen that the event $T_\epsilon$ contains the true i.i.d. distribution $p$, because sequences whose empirical probabilities $q$ satisfy $q \approx p$ also satisfy $2^{-n(H + \epsilon)} \leq p(x) \leq 2^{-n(H - \epsilon)}$.
Using the principles discussed in [link] , it is readily seen that the probability under the prior of the inputs in $T_\epsilon$ satisfies $p^n(T_\epsilon) \rightarrow 1$ when $n \rightarrow \infty$. Therefore, a code that enumerates $T_\epsilon$ will encode $x$ correctly with high probability.
The key question is the size of such a code, or the cardinality of $T_\epsilon$. Because each $x \in T_\epsilon$ satisfies $p(x) \geq 2^{-n(H + \epsilon)}$, and $\sum_{x \in T_\epsilon} p(x) \leq 1$, we have $|T_\epsilon| \leq 2^{n(H + \epsilon)}$. Therefore, a rate $R = H + \epsilon$ allows near-lossless coding, because the probability of error vanishes (recall that $p^n(T_\epsilon^c) \rightarrow 0$, where $c$ denotes the complement).
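The following brute-force sketch, assuming a binary alphabet with an illustrative Bernoulli prior and a small block length, enumerates the $\epsilon$-typical set and checks both the cardinality bound $|T_\epsilon| \leq 2^{n(H+\epsilon)}$ and the probability captured by $T_\epsilon$.

```python
# Brute-force construction of the epsilon-typical set for an i.i.d. Bernoulli source.
from itertools import product
from math import log2

p1 = 0.2                                   # Pr(symbol = 1); illustrative prior
H = -p1 * log2(p1) - (1 - p1) * log2(1 - p1)
n, eps = 16, 0.1                           # n kept small so full enumeration is feasible

def prob(x):
    """Probability of the binary sequence x under the i.i.d. Bernoulli(p1) prior."""
    k = sum(x)
    return (p1 ** k) * ((1 - p1) ** (len(x) - k))

typical = [x for x in product((0, 1), repeat=n)
           if 2 ** (-n * (H + eps)) <= prob(x) <= 2 ** (-n * (H - eps))]

print(len(typical), 2 ** (n * (H + eps)))  # cardinality versus the 2^{n(H+eps)} bound
print(sum(prob(x) for x in typical))       # probability of T_eps; tends to 1 as n grows
```

For a block length this small the typical set captures only part of the probability mass; the convergence $p^n(T_\epsilon) \rightarrow 1$ only becomes apparent at larger $n$.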
On the other hand, a rate $R < H$ will not allow lossless coding, and the probability of error will go to 1. We can see this intuitively. Because the type class whose empirical probability $q$ is equal (or closest) to the prior $p$ dominates, a type class whose sequences have larger probability, e.g., $p(x) > 2^{-n(H - \epsilon)}$, will have small probability in aggregate. That is,
$$p^n\left(\left\{ x : p(x) > 2^{-n(H - \epsilon)} \right\}\right) \rightarrow 0 \quad \text{as } n \rightarrow \infty.$$
In words, choosing a code with rate $R < H$ that contains the $2^{nR}$ words with the highest probabilities will fail; it will not cover enough probability mass. We conclude that near-lossless coding is possible at rates above $H$ and impossible below $H$.
To see things from a more intuitive angle, consider the definition of entropy, $H = \sum_{a \in \mathcal{X}} p(a) \log_2 \frac{1}{p(a)}$. If we consider each bit as reducing uncertainty by a factor of 2, then the average log-likelihood of a length-$n$ input generated by $p$ satisfies
$$E\left[ \log_2 \frac{1}{p(x)} \right] = n \sum_{a \in \mathcal{X}} p(a) \log_2 \frac{1}{p(a)} = nH.$$
Because the expected log-likelihood of $x$ is $nH$ bits, i.e., a typical input has probability roughly $2^{-nH}$, it will take $nH$ bits to reduce the uncertainty by this factor.
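A quick Monte Carlo sanity check of this identity, assuming an illustrative three-letter prior, estimates $E\left[\log_2 \frac{1}{p(x)}\right]$ from random draws and compares it with $nH$.

```python
# Empirical check that E[-log2 p(x)] = n*H for an i.i.d. source.
import random
from math import log2

probs = {'a': 0.5, 'b': 0.25, 'c': 0.25}        # illustrative prior
H = -sum(pa * log2(pa) for pa in probs.values())
n, trials = 200, 2000

total = 0.0
for _ in range(trials):
    x = random.choices(list(probs), weights=list(probs.values()), k=n)
    total += -sum(log2(probs[a]) for a in x)    # -log2 p(x) for this sample

print(total / trials, n * H)                    # the two agree up to sampling noise
```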
Fixed to variable length source coding: The near-lossless coding above relies on enumerating a collection $T_\epsilon$ of high-probability codewords. However, this approach suffers from a troubling failure for inputs $x \notin T_\epsilon$. To solve this problem, we incorporate a code that maps $x$ to an output consisting of a variable number of bits. That is, the length of the code will be approximately $nH$ bits on average, but could be greater or lesser.
One possible variable length code is due to Shannon. Consider all possible $x \in \mathcal{X}^n$. For each $x$, allocate $\ell(x) = \left\lceil \log_2 \frac{1}{p(x)} \right\rceil$ bits to $x$. It can be shown that it is possible to construct an invertible (uniquely decodable) code as long as the length $\ell(x)$ in bits allocated to each $x$ satisfies
$$\sum_{x \in \mathcal{X}^n} 2^{-\ell(x)} \leq 1.$$
This result is known as the Kraft Inequality. Seeing that
$$\sum_{x \in \mathcal{X}^n} 2^{-\left\lceil \log_2 \frac{1}{p(x)} \right\rceil} \leq \sum_{x \in \mathcal{X}^n} 2^{-\log_2 \frac{1}{p(x)}} = \sum_{x \in \mathcal{X}^n} p(x) = 1,$$
we see that the length allocation we suggested satisfies the Kraft Inequality. Therefore, it is possible to construct an invertible (and hence lossless) code with lengths upper bounded by
$$\ell(x) = \left\lceil \log_2 \frac{1}{p(x)} \right\rceil \leq \log_2 \frac{1}{p(x)} + 1,$$
and we have
$$E[\ell(x)] \leq E\left[ \log_2 \frac{1}{p(x)} \right] + 1 = nH + 1.$$
This simple construction achieves an expected length within 1 bit of the entropy.
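The sketch below applies the Shannon length allocation to single symbols of a toy source rather than to length-$n$ blocks (the distribution is an illustrative assumption; for block coding the same recipe is applied to $p(x)$ over $\mathcal{X}^n$), verifying the Kraft inequality and the 1-bit gap.

```python
# Shannon length allocation l = ceil(log2(1/p)), Kraft sum, and expected length.
from math import ceil, log2

probs = {'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1}        # illustrative distribution

lengths = {s: ceil(log2(1 / p)) for s, p in probs.items()}
kraft_sum = sum(2 ** (-l) for l in lengths.values())     # <= 1 allows a uniquely decodable code
H = -sum(p * log2(p) for p in probs.values())
avg_len = sum(probs[s] * lengths[s] for s in probs)

print(lengths)          # {'a': 2, 'b': 2, 'c': 3, 'd': 4}
print(kraft_sum)        # 0.6875 <= 1, so a prefix code with these lengths exists
print(H, avg_len)       # the expected length is within 1 bit of the entropy
```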
Unfortunately, a Shannon code is impractical, because it requires constructing a code book of exponential size $|\mathcal{X}|^n$. Instead, arithmetic codes [link] are used; we discussed arithmetic codes in detail in class, but they appear in all standard textbooks and so we do not describe them here.