As a consequence of the segmented structure of the BWT output y, it is easy to see that y can be compressed with the following redundancy,
where the new term arises from coding the locations of transitions between segments (states of the tree) in the BWT output. Not only is the BWT convenient for compression, it is also amenable to fast computation: both the BWT and its inverse can be implemented in linear time. This combination of strong compression and speed has made the BWT quite popular in compressors that have appeared since the late 1990s. For example, the BWT-based bzip2 archiving package is very popular among network administrators.
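To make the transform concrete, here is a minimal Python sketch of the BWT and its inverse. For clarity it sorts all rotations naively, which is far from the linear-time bound mentioned above; practical implementations build the transform from a suffix array or suffix tree instead. The function names are illustrative, not from the text.

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations (naive, for clarity).

    Practical codecs obtain the same output from a suffix array or
    suffix tree, which brings the cost down to linear time.
    """
    s = s + "\0"  # unique sentinel marking the end of the string
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)


def ibwt(r):
    """Invert the BWT by iteratively rebuilding the sorted rotations."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(r[i] + table[i] for i in range(len(r)))
    # The rotation ending in the sentinel is the original string.
    return next(row for row in table if row.endswith("\0"))[:-1]


# Round trip: the transform is invertible, and its output groups
# symbols that share a context, which is what makes it compressible.
assert ibwt(bwt("banana")) == "banana"
```

The inverse shown here is the simple table method; the standard last-to-first mapping achieves the same result in linear time.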
That said, from a theoretical perspective the BWT suffers from an extraneous redundancy term. Until this gap was resolved, the theoretical community still preferred the semi-predictive method or another approach based on mixtures.
Another approach is to use the BWT only for learning the MDL tree source. To do so, note that while the BWT is run, it is possible to track the correspondence between contexts and segments of the BWT output. Therefore, per-segment symbol counts are available and can easily be applied in the tree-pruning procedure that we have seen. Not only that, but some BWT computation algorithms (e.g., those based on suffix trees) maintain this information for all context depths, not just bounded ones. In short, the BWT allows us to compute the minimizing tree in linear time [link].
Given the minimizing tree, it is not obvious how to determine which state generated each character of the input (respectively, the BWT output) in linear time. Martín et al. [link] showed that this step can also be performed in linear time by constructing a state machine whose states include the leaves of the tree. The result is a two-part code: the first part computes the optimal tree via the BWT, and the second part compresses the sequence by tracking which state of the tree generated each symbol. To summarize, we have a linear-complexity algorithm for compressing and decompressing a source while achieving the redundancy bounds for the class of tree sources.
We discussed in [link], for the problem of encoding a transition between two known i.i.d. distributions, that
Therefore, a mixture over all parameter values yields a greater probability (and thus a lower coding length) than the maximizing approach. Keep in mind that finding the optimal MDL tree source is analogous to the plug-in approach; the coding length would be reduced if we could instead assign the probability as a mixture over all possible trees, giving trees with fewer leaves a greater weight. That is, ideally we want to implement
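The advantage of mixing over plugging in the maximizer can already be seen for a single Bernoulli parameter. The sketch below (function names are illustrative, not from the text) verifies that the Krichevsky-Trofimov mixture probabilities of all length-n binary sequences sum to exactly 1, i.e., they form a valid probability assignment, whereas the plug-in maximum-likelihood probabilities sum to more than 1 and therefore cannot serve as codeword lengths without extra bits spent describing the parameter.

```python
from fractions import Fraction
from itertools import product
from math import factorial

def kt(a, b):
    """Krichevsky-Trofimov mixture probability of a zeros and b ones."""
    p = Fraction(1, factorial(a + b))
    for i in range(a):
        p *= Fraction(2 * i + 1, 2)
    for j in range(b):
        p *= Fraction(2 * j + 1, 2)
    return p

def ml(a, b):
    """Plug-in probability: likelihood under the ML parameter b/(a+b)."""
    theta = Fraction(b, a + b)
    return (1 - theta) ** a * theta ** b

n = 6
seqs = list(product("01", repeat=n))
kt_total = sum(kt(s.count("0"), s.count("1")) for s in seqs)
ml_total = sum(ml(s.count("0"), s.count("1")) for s in seqs)

assert kt_total == 1  # mixture: a valid probability assignment
assert ml_total > 1   # plug-in: overshoots, hence extra redundancy
```

The overshoot of the plug-in sum is exactly the reason the two-part (plug-in) code pays an extra description cost that the mixture avoids.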
where the first quantity is the length of the encoding procedure that we discussed for the tree structure, and the second is the probability of the sequence under the model defined by that tree.
Willems et al. showed how to implement such a mixture in a simple way over the class of tree sources of bounded depth. As before, the algorithm proceeds bottom-up from the leaves toward the root. At a leaf, the probability assigned to the symbols generated within that context is the Krichevsky-Trofimov probability [link]. For an internal node whose depth is less than the bound, the approach of Willems et al. [link] is to mix ( i ) the probabilities obtained by keeping the branches for 0s and 1s and ( ii ) the probability obtained by pruning,
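In standard CTW notation this mix is the recursion P_w^s = (1/2) P_e^s + (1/2) P_w^{0s} P_w^{1s}, where P_e^s is the Krichevsky-Trofimov estimate at node s. The sketch below implements it for binary sequences with exact arithmetic; all names are illustrative, and for simplicity it recomputes counts in a batch rather than sequentially, as a real CTW coder would.

```python
from fractions import Fraction
from math import factorial

def kt(a, b):
    """Krichevsky-Trofimov probability of a zeros and b ones."""
    p = Fraction(1, factorial(a + b))
    for i in range(a):
        p *= Fraction(2 * i + 1, 2)
    for j in range(b):
        p *= Fraction(2 * j + 1, 2)
    return p

def context_counts(x, D):
    """Zero/one counts following every context of depth 0..D in bit string x.

    The first D bits serve only as initial context, as in standard CTW.
    """
    counts = {}
    for t in range(D, len(x)):
        for d in range(D + 1):
            s = x[t - d:t]  # the d most recent past bits
            a, b = counts.get(s, (0, 0))
            counts[s] = (a + 1, b) if x[t] == "0" else (a, b + 1)
    return counts

def ctw(counts, s, D):
    """Weighted probability at node s: mix 'prune here' (the KT estimate)
    with 'keep both branches' (the product of the children's weights)."""
    a, b = counts.get(s, (0, 0))
    pe = kt(a, b)
    if len(s) == D:  # leaf of the bounded-depth context tree
        return pe
    half = Fraction(1, 2)
    return half * pe + half * ctw(counts, "0" + s, D) * ctw(counts, "1" + s, D)

x = "0110100110010110"
D = 2
p = ctw(context_counts(x, D), "", D)
assert Fraction(0) < p <= 1  # a valid probability assignment
```

Because the root weight is at least half of every pruning's contribution, the resulting coding length exceeds that of the best tree by at most its tree-description cost.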
It can be shown that this simple formula implements a mixture over the class of bounded-depth context tree sources, thus reducing the coding length relative to the semi-predictive approach.
In fact, Willems later showed how to extend the context tree weighting (CTW) approach to tree sources of unbounded depth [link]. Unfortunately, while the basic bounded-depth CTW has complexity comparable to that of the BWT, the unbounded-depth CTW has potentially higher complexity.