and so the mean square error of the Kolmogorov sampler's estimate is double the MMSE [link].
Let us pause to reflect on this result. When the SNR is high, the MMSE should be rather low, and double the MMSE seems pretty good. On the other hand, when the SNR is low, the MMSE could be almost as large as the variance of x, and double the MMSE could then be larger than that variance – as much as twice as large. That is, simply guessing the mean of x could give better signal estimation performance than using the Kolmogorov sampler. This pessimistic result encourages us to search for better signal reconstruction methods.
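To make the low-SNR concern concrete, here is a small numerical sketch of our own (not part of the original development), assuming a zero-mean Gaussian source with variance σ_x² observed through additive Gaussian noise with variance σ_z², for which the MMSE is σ_x²σ_z²/(σ_x²+σ_z²):

```python
# Illustration (our own): compare double the MMSE against sx2, the error of
# simply guessing the prior mean, for a scalar Gaussian source in Gaussian noise.
for snr in [10.0, 1.0, 0.5, 0.1]:
    sx2 = 1.0                 # assumed signal variance
    sz2 = sx2 / snr           # noise variance implied by this SNR
    mmse = sx2 * sz2 / (sx2 + sz2)
    print(f"SNR={snr:5.2f}  MMSE={mmse:.3f}  2*MMSE={2*mmse:.3f}  "
          f"guess-the-mean MSE={sx2:.3f}")
# For SNR < 1, 2*MMSE exceeds sx2, which is exactly the regime where guessing wins.
```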
Arbitrary channels: So far we considered the Kolmogorov sampler for the white scalar channel, y = x + z. Suppose instead that x is processed or measured by a more complicated system, y = Φ(x) + z.
Note that Φ is known; e.g., in a compressed sensing application [link], [link], Φ would be a known matrix. An even more involved system would be y = Z(Φ(x)), where Φ(x) denotes the application of the mapping Φ to x, and Z(·) denotes the application of a random noise operator to Φ(x). To keep the presentation simple, we use the additive noise setting [link].
How can the Kolmogorov sampler [link] be applied to the additive noise setting? Recall that for the scalar channel, the Kolmogorov sampler minimizes the Kolmogorov complexity K(w) subject to the fidelity constraint ‖y − w‖² ≤ N σ_z². For the arbitrary mapping with additive noise [link], this implies the constraint ‖y − Φ(w)‖² ≤ N σ_z². Therefore, we get

x̂_KS = arg min_w { K(w) : ‖y − Φ(w)‖² ≤ N σ_z² }.
Another similar approach relies on optimization via Lagrange multipliers,

x̂ = arg min_w [ K(w) − log₂ f(y | Φ(w)) ],

where the Lagrange multiplier is 1, because both K(w) and the negative log-likelihood − log₂ f(y | Φ(w)) are quantified in bits.
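Since Kolmogorov complexity is not computable, any concrete use of this Lagrangian form needs a computable stand-in. The sketch below is our own illustration, with all names, sizes, and the Bernoulli prior assumed: K(w) is replaced by the code length of w under an i.i.d. Bernoulli model, the data fit is the Gaussian negative log₂-likelihood, and the minimization is done by brute force over a tiny candidate set.

```python
import itertools, math
import numpy as np

# Sketch: K(w) is not computable, so we substitute the code length of w under
# an assumed i.i.d. Bernoulli(p1) prior; the data-fit term is the Gaussian
# negative log2-likelihood.  Both terms are in bits, so the multiplier is 1.
rng = np.random.default_rng(0)
N, M, sz2, p1 = 8, 5, 0.1, 0.2            # assumed sizes, noise variance, prior P(x_i = 1)
Phi = rng.standard_normal((M, N))          # known measurement operator (here a matrix)
x = (rng.random(N) < p1).astype(float)     # unknown binary signal
y = Phi @ x + math.sqrt(sz2) * rng.standard_normal(M)

def codelength_bits(w):                    # stand-in for K(w)
    ones = w.sum()
    return -(ones * math.log2(p1) + (N - ones) * math.log2(1 - p1))

def noise_bits(w):                         # -log2 f(y | Phi w), up to an additive constant
    return float(np.sum((y - Phi @ w) ** 2)) / (2 * sz2 * math.log(2))

candidates = (np.array(c) for c in itertools.product([0.0, 1.0], repeat=N))
x_hat = min(candidates, key=lambda w: codelength_bits(w) + noise_bits(w))
print("estimate:", x_hat, "\ntruth:   ", x)
```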
What is the performance of the Kolmogorov sampler for an arbitrary Φ? We speculate [link], [link] that the estimate is generated by the posterior, and so its mean square error is double the MMSE, where the expectation is taken over the source x and the noise z. These results remain to be shown rigorously.
We will now prove a substantial result – that the MCMC algorithm [link], [link], [link] converges to the globally minimal energy solution for the specific case of compressed sensing [link], [link]. An extension of this proof to arbitrary channels is in progress.
If the operator in [link] is a matrix, which we denote by Φ ∈ ℝ^{M×N} with M < N, then the setup is known as compressed sensing (CS) [link], [link], and the estimation problem is commonly referred to as recovery or reconstruction. By posing a sparsity or compressibility requirement on the signal and using it as a prior during recovery, it is indeed possible to accurately estimate x from y in the CS setting.
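To see why a prior is essential in this underdetermined setting, here is a small sketch of our own (the dimensions, matrix, and sparsity level are assumptions): with M < N, plain least squares spreads energy across all coordinates and misses the sparse structure that a sparsity prior would exploit.

```python
import numpy as np

# Sketch of the underdetermined CS setup y = Phi x + z with M < N.
rng = np.random.default_rng(1)
M, N, k = 10, 30, 2                               # assumed sizes and sparsity
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
y = Phi @ x + 0.01 * rng.standard_normal(M)

# Plain least squares ignores the sparsity prior; its minimum-norm solution is dense.
x_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]
print("nonzeros in truth:", np.count_nonzero(x),
      " entries > 1e-3 in least-squares solution:", int(np.sum(np.abs(x_ls) > 1e-3)))
```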
With the quantization alphabet definition in [link], the quantized representation of x attains greater resolution as the signal length increases. We will show that, under suitable conditions, performing maximum a posteriori (MAP) estimation over the discrete alphabet asymptotically converges to the MAP estimate over the continuous distribution of x. This reduces the complexity of the estimation problem from continuous to discrete.
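A minimal scalar sketch of this convergence (our own illustration, assuming a Gaussian prior and Gaussian noise so that the continuous MAP estimate has a closed form): as the quantization grid gets finer, the MAP estimate restricted to the grid approaches the continuous MAP estimate.

```python
import numpy as np

# Sketch: scalar Gaussian prior N(0, sx2), observation y = x + z with z ~ N(0, sz2).
# The continuous MAP estimate is y * sx2 / (sx2 + sz2); we compare it with the MAP
# estimate restricted to ever-finer quantization grids.
sx2, sz2, y = 1.0, 0.5, 1.3                       # assumed variances and observation
x_map_cont = y * sx2 / (sx2 + sz2)                # closed-form continuous MAP
for levels in [4, 16, 64, 256, 1024]:
    grid = np.linspace(-4, 4, levels)             # quantization alphabet
    score = -grid**2 / (2 * sx2) - (y - grid)**2 / (2 * sz2)   # log prior + log likelihood
    x_map_disc = grid[np.argmax(score)]
    print(f"{levels:5d} levels: |discrete MAP - continuous MAP| = "
          f"{abs(x_map_disc - x_map_cont):.4f}")
```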
We assume for exposition that we know the input statistics f(x). Given the measurements y, the MAP estimator for x has the form

x̂_MAP = arg max_w f(w | y) = arg max_w f(y | w) f(w).
Because the noise z is i.i.d. Gaussian with mean zero and known variance σ_z²,

x̂_MAP = arg max_w f(w) exp( −‖y − Φw‖² / (2σ_z²) ) = arg min_w [ ‖y − Φw‖² / (2σ_z²) − ln f(w) ].
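Putting the two pieces together, the sketch below (the alphabet, its prior, and the problem sizes are our own assumptions) evaluates this objective by brute force over a small discrete alphabet, picking the candidate w that maximizes log f(w) − ‖y − Φw‖²/(2σ_z²).

```python
import itertools, math
import numpy as np

# Sketch: brute-force MAP over a small discrete alphabet, matching the
# Gaussian-noise MAP form above (alphabet, prior, and sizes are assumptions).
rng = np.random.default_rng(2)
alphabet = [-1.0, 0.0, 1.0]
pmf = {-1.0: 0.1, 0.0: 0.8, 1.0: 0.1}             # assumed prior on each entry of x
N, M, sz2 = 6, 4, 0.05
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = rng.choice(alphabet, size=N, p=[pmf[a] for a in alphabet])
y = Phi @ x + math.sqrt(sz2) * rng.standard_normal(M)

def log_posterior(w):                              # log f(w) - ||y - Phi w||^2 / (2 sz2)
    log_prior = sum(math.log(pmf[v]) for v in w)
    residual = y - Phi @ np.array(w)
    return log_prior - np.dot(residual, residual) / (2 * sz2)

x_map = max(itertools.product(alphabet, repeat=N), key=log_posterior)
print("MAP estimate:", np.array(x_map), "\ntruth:       ", x)
```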