Because of the AEP [link] , a typical interval of length $l$ has probability approximately $2^{-lH}$, where $H$ is the entropy rate. Therefore, for a typical input, each of the roughly $n_w$ starting positions in a history (window) of length $n_w$ matches the interval with probability approximately $2^{-lH}$.
Recall that the interval length is $l$, and so the probability that the interval cannot be found in the history is approximately $(1-2^{-lH})^{n_w} \approx e^{-n_w 2^{-lH}}$.
As long as $l < \frac{\log(n_w)}{H}$, this probability goes to zero as the history grows.
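To make the scaling concrete, here is a small numeric sketch (the entropy rate and history length below are illustrative assumptions, not values from the text) of the approximation $(1-2^{-lH})^{n_w} \approx e^{-n_w 2^{-lH}}$:

```python
import math

# Illustrative numbers (assumed, not from the text): entropy rate H in
# bits/symbol and history (window) length n_w.
H = 0.5
n_w = 2 ** 20

for l in (20, 30, 40, 50):
    # Probability that a typical length-l interval is NOT found in the
    # history, treating the n_w starting positions as roughly independent:
    # (1 - 2^{-lH})^{n_w}  ~=  exp(-n_w * 2^{-lH})
    p_no_match = math.exp(-n_w * 2 ** (-l * H))
    print(f"l = {l:2d}   P(no match) ~ {p_no_match:.3g}")
# The transition happens near l = log2(n_w)/H = 40 symbols.
```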
There are many Ziv-Lempel style parsing algorithms [link] , [link] , [link] , and each of the variants has different details, but the key idea is to find the longest match in a window of length $n_w$. The length of the match is approximately $l \approx \frac{\log(n_w)}{H}$, where we remind the reader that $H$ is the entropy rate of the source.
Now, encoding the location of the match requires $\log(n_w)$ bits, and so the per-symbol compression ratio is $\frac{\log(n_w)}{l} \approx H$, which in the limit of large $n_w$ approaches the entropy rate $H$.
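As a rough illustration of the parsing step, the following brute-force sketch (not any particular published variant) finds the longest match of the upcoming phrase within a window:

```python
def longest_match(window: str, lookahead: str):
    """Return (offset, length) of the longest prefix of `lookahead`
    that occurs somewhere in `window`; a brute-force O(n_w * l) scan."""
    best_off, best_len = 0, 0
    for off in range(len(window)):
        length = 0
        # Extend the match as far as possible from this starting offset.
        while (length < len(lookahead)
               and off + length < len(window)
               and window[off + length] == lookahead[length]):
            length += 1
        if length > best_len:
            best_off, best_len = off, length
    return best_off, best_len

# The (offset, length) pair is then encoded: about log2(n_w) bits for the
# offset, plus a description of the match length.
print(longest_match("abracadabra", "abrac"))   # -> (0, 5)
```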
However, the encoding of a match must also describe its length, and often the symbol that follows the match. These require $O(\log(l)) = O(\log\log(n_w))$ bits, and the normalized (per-symbol) cost is $O\left(\frac{\log\log(n_w)}{\log(n_w)}\right)$.
Therefore, the redundancy of Ziv-Lempel style compression algorithms is proportional to $\frac{\log\log(n_w)}{\log(n_w)}$, which is much greater than the $O\left(\frac{\log(n)}{n}\right)$ redundancy that we have seen for parametric sources. The fundamental reason why the redundancy is greater is that the class of non-parametric sources is much richer. Detailed redundancy analyses appear in a series of papers by Savari (c.f. [link] ).
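A quick numeric comparison (illustrative sizes only) shows how slowly the Ziv-Lempel redundancy decays relative to the parametric rate:

```python
import math

# Illustrative comparison of the two redundancy scalings (sizes assumed):
#   Ziv-Lempel style:    log(log(n_w)) / log(n_w)
#   parametric sources:  log(n) / n
for n in (2 ** 10, 2 ** 20, 2 ** 30):
    zl = math.log2(math.log2(n)) / math.log2(n)
    parametric = math.log2(n) / n
    print(f"n = 2^{int(math.log2(n)):2d}:  ZL ~ {zl:.3f}   parametric ~ {parametric:.1e}")
```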
The parsing schemes that we have seen can also be adapted to lossy compression. Let us describe several approaches along these lines.
Fixed length: The first scheme, due to Gupta et al. [link] , constructs a codebook of size $2^{lR(D)}$ codewords, where $l$ is the length of the phrase being matched and $R(D)$ is the rate distortion function. The algorithm cannot search for perfect matches of the phrase, because this is lossy compression. Instead, it seeks the codeword that matches our input phrase most closely. It turns out that for large $l$ the expected distortion of the lossy match will be approximately $D$ per symbol.
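The following toy sketch conveys the fixed-length idea under Hamming distortion; the actual construction of Gupta et al. draws codewords from the optimal reproduction distribution and differs in important details:

```python
import random

def lossy_encode_fixed(phrase, l, R):
    """Encode a length-l binary phrase by exhaustive search over a random
    codebook of ~2^(l*R) codewords; return (index, per-symbol distortion).
    Toy sketch: codewords here are drawn uniformly at random, not from the
    optimal reproduction distribution."""
    num_codewords = 2 ** round(l * R)
    codebook = [[random.randint(0, 1) for _ in range(l)]
                for _ in range(num_codewords)]
    # Pick the codeword closest to the phrase under Hamming distortion.
    best_idx = min(range(num_codewords),
                   key=lambda i: sum(c != x for c, x in zip(codebook[i], phrase)))
    dist = sum(c != x for c, x in zip(codebook[best_idx], phrase)) / l
    return best_idx, dist

phrase = [random.randint(0, 1) for _ in range(16)]
print(lossy_encode_fixed(phrase, l=16, R=0.5))  # index fits in l*R = 8 bits
```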
Variable length: Another approach, due to Gioran and Kontoyiannis [link] , constructs a single long database string, and searches for the longest match whose distortion w.r.t. the input is approximately $D$; the location and length of the approximate match are encoded. Seeing that the database is of length $n_d$, encoding the location requires $\log(n_d)$ bits, and the $D$-match (a match with distortion $D$ w.r.t. the input string) is typically of length $\frac{\log(n_d)}{R(D)}$, giving a per-symbol rate of approximately $R(D)$ bits.
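A toy sketch of the variable-length search under Hamming distortion (the published algorithm handles the construction of the database and the encoding of location and length more carefully):

```python
def longest_d_match(database, phrase, D):
    """Find the longest prefix of `phrase` that matches some substring of
    `database` with per-symbol Hamming distortion at most D.
    Returns (offset, length); brute force for clarity."""
    best_off, best_len = 0, 0
    for off in range(len(database)):
        errors, length = 0, 0
        max_len = min(len(phrase), len(database) - off)
        for i in range(max_len):
            errors += database[off + i] != phrase[i]
            if errors <= D * (i + 1):      # distortion constraint still met
                length = i + 1
        if length > best_len:
            best_off, best_len = off, length
    return best_off, best_len

# The location costs ~log2(len(database)) bits; the match length must also
# be described, which is the overhead the fixed-length scheme avoids.
print(longest_d_match([0, 1, 1, 0, 1, 0, 1, 1], [1, 1, 1, 0], D=0.25))
# -> (0, 4): a length-4 match with one mismatch (distortion 1/4)
```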
An advantage of the latter scheme by Gioran and Kontoyiannis [link] is reduced memory use. The database is a single string of length roughly $2^{lR(D)}$, instead of a codebook comprised of $2^{lR(D)}$ codewords, each of length $l$. On the other hand, the Gupta et al. algorithm [link] has better performance, because it does not need to spend $O(\log(l))$ bits per phrase to describe its length. An improved algorithm, dubbed the hybrid algorithm by Gioran and Kontoyiannis, constructs a single database and performs fixed length coding for the best match of length $l$ in the database. Therefore, it combines the memory usage of a single database approach with the performance of fixed length coding.
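A rough sketch of the hybrid idea under the same toy assumptions: only fixed-length matches of length $l$ against a single database are considered, so each phrase is encoded by its offset alone:

```python
def hybrid_encode(database, phrase):
    """Fixed-length coding against a single database string: choose the
    offset whose length-l window is closest to the phrase under Hamming
    distortion; only the offset, ~log2(len(database)) bits, is sent."""
    l = len(phrase)
    return min(range(len(database) - l + 1),
               key=lambda off: sum(d != x
                                   for d, x in zip(database[off:off + l], phrase)))

# e.g. hybrid_encode([0,1,1,0,1,0,1,1], [1,0,1,1]) -> 4 (exact match at offset 4)
```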