We map the input $x$ to a sequence $z$ over a finite alphabet $\mathcal{Z}$, where the actual numerical values are obtained by a scalar function $\gamma: \mathcal{Z} \rightarrow \mathbb{R}$. Ideally, $\gamma$ should minimize distortion. Because we focus on square error distortion, the optimal $\gamma$ is the conditional expectation,
$$\gamma(a) = E[x_i \mid z_i = a].$$
Keep in mind that in an actual compression system, the encoder knows $x$, but the decoder does not. Therefore, the encoder must describe (a quantized version of) $\gamma$ to the decoder. Putting aside these details, the main point is that the adaptive reproduction alphabet algorithm also achieves the RD function for stationary and ergodic sources while allowing a reduction in the size of the alphabet, thus accelerating the algorithm; a minimal sketch of the centroid rule appears below.
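To make the conditional-expectation rule concrete, here is a minimal Python sketch. The arrays `x` (real-valued input) and `z` (symbol assignments), and the crude cell mapping in the example, are illustrative assumptions, not part of the algorithm as stated; the sketch simply estimates $\gamma(a)$ by the per-symbol sample mean, the empirical analogue of $E[x_i \mid z_i = a]$.

```python
# Empirical version of the optimal reproduction map under squared error:
# gamma(a) = E[x_i | z_i = a], estimated by per-symbol sample means.
import numpy as np

def optimal_gamma(x: np.ndarray, z: np.ndarray, alphabet_size: int) -> np.ndarray:
    """Return gamma[a] = sample mean of x over positions where z == a."""
    gamma = np.zeros(alphabet_size)
    for a in range(alphabet_size):
        mask = (z == a)
        if mask.any():
            gamma[a] = x[mask].mean()  # empirical conditional expectation
    return gamma

# Example (hypothetical data): 3-symbol alphabet from a crude 3-cell mapping.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
z = np.digitize(x, bins=[-0.5, 0.5])          # symbols in {0, 1, 2}
print(optimal_gamma(x, z, alphabet_size=3))   # one reproduction value per symbol
```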
Scalar channel: Consider a scalar channel, $y = x + z$, where $x \in \mathbb{R}^n$ is generated by a stationary and ergodic source $X$, and $z$ is zero-mean unit-norm Gaussian noise. To avoid taking the analysis toward continuous-valued $x$, we can simplify the setting and temporarily consider discrete-valued $x$ and $y$; for example, we can consider a quantized version of the continuous problem.
Donoho [link] proposed the following reconstruction,
$$\hat{x}_{KS} = \arg \min_{w \,:\, \|y - w\|_2^2 \le n\sigma^2} K(w),$$
where $w$ is a possible reconstruction sequence, $\sigma^2$ is the noise variance, and $K(w)$ is the Kolmogorov complexity of $w$. Donoho calls this reconstruction algorithm the Kolmogorov sampler.
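Although $K(\cdot)$ is uncomputable, the structure of the rule, a fidelity constraint plus a complexity minimization, is easy to state in code. The brute-force Python sketch below works over a tiny binary alphabet with a pluggable complexity proxy (the function name and interface are hypothetical); it exists only to make the structure explicit, and its cost is exponential in $n$.

```python
# Brute-force sketch of the Kolmogorov-sampler rule on binary sequences.
# `complexity` is a user-supplied computable proxy for K(.), since the
# true Kolmogorov complexity cannot be computed.
import itertools
import numpy as np

def kolmogorov_sampler(y: np.ndarray, complexity, sigma2: float = 1.0):
    """Among candidates w inside the noise ball, return the 'simplest' one."""
    n = len(y)
    best_w, best_c = None, float("inf")
    for bits in itertools.product([0.0, 1.0], repeat=n):   # exponential search
        w = np.array(bits)
        if np.sum((y - w) ** 2) <= n * sigma2:             # fidelity constraint
            c = complexity(w)
            if c < best_c:
                best_w, best_c = w, c
    return best_w  # None if no candidate lies inside the noise ball
```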
What is the Kolmogorov complexity [link] , [link] , [link] of the sequence $w$? It is the length of the shortest computer program that outputs $w$ and then halts. One may ask what computer language we can use for the program; after all, it is easy to imagine some programming language being well-tuned to a specific sequence $w$. However, concepts such as universal Turing machines [link] can be used to put forward an abstract programming language that will run on any hypothetical computer. The key point is that the computer is limited to a state machine that processes an infinite-length tape, and one state machine can emulate another by programming a compiler onto the tape.
Is it practical to talk about the shortest program that outputs $w$ and then halts? Not really, because we would need to run the Turing machine [link] (the computer) on an exponential number of programs. Moreover, it is well known that for some programs the Turing machine never halts (there are endless loops), and technical conditions such as time-out mechanisms would need to be added in. Therefore, computing the Kolmogorov complexity $K(w)$ is impractical [link] , [link] , [link] . However, for $w$ that was generated by a stationary and ergodic source, it is possible to approximate $\frac{1}{n}K(w)$ by $\hat{H}(w)$, the empirical entropy [link] , [link] .
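As one concrete, and assumed, computable stand-in for $\frac{1}{n}K(w)$, the following sketch estimates an order-$k$ empirical conditional entropy from symbol counts; the order $k$ and function name are illustrative choices. It can be plugged into the `complexity` argument of the search sketch above.

```python
# Order-k empirical conditional entropy, in bits per symbol: one common
# computable proxy for Kolmogorov complexity per symbol on data from a
# stationary ergodic source.
from collections import Counter
import numpy as np

def empirical_entropy(w, k: int = 2) -> float:
    """Estimate H(w_i | w_{i-k}, ..., w_{i-1}) from empirical counts."""
    w = [int(s) for s in w]
    ctx_counts, pair_counts = Counter(), Counter()
    for i in range(k, len(w)):
        ctx = tuple(w[i - k:i])
        ctx_counts[ctx] += 1
        pair_counts[(ctx, w[i])] += 1
    total = sum(ctx_counts.values())
    if total == 0:
        return 0.0  # sequence too short to estimate
    h = 0.0
    for (ctx, s), c in pair_counts.items():
        p_joint = c / total            # empirical P(context, symbol)
        p_cond = c / ctx_counts[ctx]   # empirical P(symbol | context)
        h -= p_joint * np.log2(p_cond)
    return h

# Usage with the earlier sketch: complexity = lambda w: len(w) * empirical_entropy(w)
```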
What is the performance of Donoho's Kolmogorov sampler? Donoho proved that $\hat{x}_{KS}$ has the following distribution [link] ,
$$\Pr(\hat{x}_{KS} = w \mid y) = \Pr(x = w \mid y).$$
That is, $\hat{x}_{KS}$ is sampled from the posterior. Therefore, given $y$, we have that $x$ and $\hat{x}_{KS}$ are both generated by the same distribution.
Consider now that $E[x \mid y]$, the minimum mean square error (MMSE) estimator, is the center of mass of the posterior. Seeing that $x - E[x \mid y]$ and $\hat{x}_{KS} - E[x \mid y]$ are orthogonal,
$$E\big[\|x - \hat{x}_{KS}\|_2^2\big] = E\big[\|x - E[x \mid y]\|_2^2\big] + E\big[\|\hat{x}_{KS} - E[x \mid y]\|_2^2\big] = 2 \cdot \mathrm{MMSE},$$
because, conditioned on $y$, both $x$ and $\hat{x}_{KS}$ are drawn from the same posterior and so have the same expected squared distance from its center of mass. That is, the Kolmogorov sampler incurs precisely twice the minimum mean square error.
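This factor-of-two conclusion is easy to check numerically. The Monte Carlo sketch below uses a toy scalar Bernoulli source in unit-variance Gaussian noise, an illustrative model rather than the general stationary ergodic setting, and compares a single posterior draw against the conditional mean; the error ratio concentrates near 2.

```python
# Monte Carlo check that posterior sampling incurs twice the MMSE, for a
# toy Bernoulli(p) source observed in unit-variance Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
n, p = 1_000_000, 0.3
x = (rng.random(n) < p).astype(float)
y = x + rng.normal(size=n)

# Posterior P(x=1 | y) via Bayes' rule with Gaussian likelihoods.
like1 = p * np.exp(-0.5 * (y - 1.0) ** 2)
like0 = (1 - p) * np.exp(-0.5 * y ** 2)
post1 = like1 / (like1 + like0)

x_mmse = post1                                  # conditional mean E[x | y]
x_samp = (rng.random(n) < post1).astype(float)  # one draw from the posterior

mmse = np.mean((x - x_mmse) ** 2)
samp = np.mean((x - x_samp) ** 2)
print(samp / mmse)  # concentrates near 2, matching the orthogonality argument
```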