Continuing from last time, we have a signal $x \in \mathbb{R}^N$, an $n \times N$ measurement matrix $\Phi$, and $y = \Phi x \in \mathbb{R}^n$ is the information we draw from $x$.
We consider decoders $\Delta$ mapping $\mathbb{R}^n$ to $\mathbb{R}^N$. We have been discussing whether there exists a decoder with certain properties. So for this discussion (about information preservation), we can just think about optimal decoding.
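For concreteness, here is a minimal numerical sketch of this setup (our illustration, assuming NumPy and SciPy are available). The $\ell_1$-minimization decoder below is just one concrete, illustrative choice of $\Delta$, solved as a linear program; it is not the abstract optimal decoder discussed in these notes.

import numpy as np
from scipy.optimize import linprog

# Dimensions: x lives in R^N, we keep n linear measurements y = Phi x.
N, n, k = 64, 24, 3
rng = np.random.default_rng(0)

# A k-sparse test signal x and a random n x N measurement matrix Phi.
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
y = Phi @ x                      # the information we draw from x

def l1_decoder(Phi, y):
    # Decode by solving min ||z||_1 subject to Phi z = y,
    # written as a linear program in z = u - v with u, v >= 0.
    n_meas, N_dim = Phi.shape
    c = np.ones(2 * N_dim)
    A_eq = np.hstack([Phi, -Phi])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:N_dim], res.x[N_dim:]
    return u - v

x_hat = l1_decoder(Phi, y)
print("recovery error:", np.linalg.norm(x - x_hat))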
While the previous result on sparse signal recovery is interesting, it is not very robust. What happens if our signal is not exactly sparse, i.e., does not have support of size at most $k$? Can we still obtain meaningful results? The answer is yes. To do so, we introduce more general input signal classes that allow fully supported signals. For example, we will consider the signal class defined by the unit $\ell_1$ ball

$$ K = U(\ell_1^N) := \{ x \in \mathbb{R}^N : \|x\|_1 \le 1 \}. $$
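For instance (a simple illustration added here for intuition), the constant vector

$$ x = \Big(\tfrac{1}{N}, \tfrac{1}{N}, \dots, \tfrac{1}{N}\Big) \in \mathbb{R}^N, \qquad \|x\|_1 = N \cdot \tfrac{1}{N} = 1, $$

has every entry nonzero and yet belongs to $U(\ell_1^N)$: sparsity is not required for membership in this class.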
We will see that these questions are actually equivalent to a classical "$n$-width" problem. The $n$-widths have seen a great deal of work over the years by a variety of mathematicians: Kolmogorov, Tikhomirov, Kashin, Gluskin, etc. There are many different flavors of $n$-widths, but we will study the Gelfand $n$-width (the least intuitive of the widths).
Let $K \subset X$ be a compact subset of a normed space $X$. Given $n$, the Gelfand width $d^n(K)_X$ (also called the dual width) is given by

$$ d^n(K)_X = \inf_{\substack{Y \subset X \\ \operatorname{codim}(Y) \le n}} \ \sup_{x \in K \cap Y} \|x\|_X. $$
In other words, we are looking for the subspace $Y$ that slices through the set $K$ so that the norms of the elements of $K$ lying in $Y$ are as small as possible. A quick example of the definition is given below, after which we state a theorem relating $n$-widths to optimal decoding.
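As a quick illustration of the definition (a simple example added here for intuition), take $K = U(\ell_2^N)$, the unit ball of $\ell_2^N$, measured in $X = \ell_2^N$. Any subspace $Y$ with $\operatorname{codim}(Y) \le n < N$ has dimension at least one and therefore contains a unit vector, so

$$ d^n\big(U(\ell_2^N)\big)_{\ell_2^N} = 1 \qquad \text{for all } 0 \le n < N. $$

Intersecting the $\ell_2$ ball with subspaces does not shrink it at all; the interesting behavior comes from sets such as $U(\ell_1^N)$, whose widths (stated at the end of this section) decay with $n$.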
Theorem. Provided that $K$ has the properties (1) $K = -K$ and (2) $K + K \subset C_0 K$ for some constant $C_0$, then

$$ d^n(K)_X \ \le\ E_n(K)_X \ \le\ C_0\, d^n(K)_X, $$

where

$$ E_n(K)_X := \inf_{(\Phi, \Delta)} \ \sup_{x \in K} \|x - \Delta(\Phi x)\|_X $$

is the best possible performance over all encoder/decoder pairs $(\Phi, \Delta)$ using $n$ measurements.
We start with the left-hand inequality. We want to take any encoder/decoder pair and use it to construct a subspace $Y$ of codimension at most $n$. So let $(\Phi, \Delta)$ be an encoder/decoder pair. Then simply let $Y = \mathcal{N}(\Phi)$, the null space of $\Phi$. Now consider any $\eta \in K \cap Y$ and note that $\Phi \eta = 0$ since $\eta \in \mathcal{N}(\Phi)$. Let $z = \Delta(0)$ be the decoding of $0$ (practically speaking, $z$ should be zero itself, but we avoid that assumption in this proof). Then

$$ \|\eta\|_X = \tfrac{1}{2}\big\|(\eta - z) - (-\eta - z)\big\|_X \ \le\ \tfrac{1}{2}\Big(\|\eta - \Delta(\Phi\eta)\|_X + \|{-\eta} - \Delta(\Phi(-\eta))\|_X\Big) \ \le\ \sup_{x \in K}\|x - \Delta(\Phi x)\|_X, $$

since $-\eta \in K$ by property (1) and $\Delta(\Phi\eta) = \Delta(\Phi(-\eta)) = \Delta(0) = z$. Because $Y = \mathcal{N}(\Phi)$ has codimension at most $n$, taking the supremum over $\eta \in K \cap Y$ and then the infimum over all encoder/decoder pairs gives $d^n(K)_X \le E_n(K)_X$.
Now we prove the right-hand inequality. Assume we have a good subspace $Y$. Suppose $Y$ has codimension $n$. Then $Y^\perp$ (the orthogonal complement of $Y$ in $\mathbb{R}^N$) has dimension $n$. Let $\phi_1, \dots, \phi_n$ be a basis for $Y^\perp$. Let $\Phi$ be the $n \times N$ matrix obtained by stacking the rows $\phi_i^T$. Then $\mathcal{N}(\Phi) = Y$. Define $\Delta(y)$ to be any element of $K \cap \Phi^{-1}(y)$ if there is one (otherwise let $\Delta(y)$ be anything in $\Phi^{-1}(y)$). Now look at the performance for some $x \in K$. Both $x$ and $\bar{x} := \Delta(\Phi x)$ are elements of $K$, so $x - \bar{x}$ is in $K + K \subset C_0 K$ (using properties (1) and (2)) and in $\mathcal{N}(\Phi) = Y$, since $\Phi x = \Phi \bar{x}$. Therefore $\|x - \bar{x}\|_X \le C_0 \sup_{\eta \in K \cap Y} \|\eta\|_X$. Thus,

$$ E_n(K)_X \ \le\ \sup_{x \in K} \|x - \Delta(\Phi x)\|_X \ \le\ C_0 \sup_{\eta \in K \cap Y} \|\eta\|_X, $$

and taking the infimum over all subspaces $Y$ of codimension $n$ yields $E_n(K)_X \le C_0\, d^n(K)_X$.
From the proof of this theorem, we see that there is a matching between the measurement matrices $\Phi$ and the subspaces $Y$ of codimension $n$ (via the null space of $\Phi$, i.e., $Y = \mathcal{N}(\Phi)$).
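To make this correspondence concrete, here is a small numerical sketch (our illustration, assuming NumPy and SciPy): starting from a basis for a subspace $Y$ of codimension $n$, we take the rows of $\Phi$ to be a basis for $Y^\perp$ and check that $Y \subset \mathcal{N}(\Phi)$, which by a dimension count gives $\mathcal{N}(\Phi) = Y$.

import numpy as np
from scipy.linalg import null_space, orth

N, n = 8, 3
rng = np.random.default_rng(1)

# A random subspace Y of R^N with codimension n, represented by an
# orthonormal basis stored in the columns of Y_basis (shape N x (N - n)).
Y_basis = orth(rng.standard_normal((N, N - n)))

# Rows of Phi = a basis for Y^perp, i.e. the null space of Y_basis^T.
Phi = null_space(Y_basis.T).T         # shape (n, N)

# Every element of Y is annihilated by Phi, so Y is contained in N(Phi);
# since dim(Y) = N - n = dim(N(Phi)), the two subspaces coincide.
print(np.allclose(Phi @ Y_basis, 0))  # True
print(Phi.shape)                      # (3, 8)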
An important result is that the Gelfand widths $d^n(U(\ell_p^N))_{\ell_q^N}$ are known, up to constant factors, for all choices of $p$ and $q$ with one exception. A precise statement of these widths can be found in the book. A particularly important case is the width of the $\ell_1$ ball measured in $\ell_2$:

$$ d^n\big(U(\ell_1^N)\big)_{\ell_2^N} \asymp \min\left\{1, \ \sqrt{\frac{\log(N/n) + 1}{n}}\right\}. $$
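Combining this bound with the theorem above gives the best possible encoder/decoder performance over the $\ell_1$ ball; this last step is a direct consequence we spell out here: $U(\ell_1^N)$ is symmetric and convex, so $K + K \subset 2K$ and we may take $C_0 = 2$, which yields, up to absolute constants,

$$ E_n\big(U(\ell_1^N)\big)_{\ell_2^N} \asymp \min\left\{1, \ \sqrt{\frac{\log(N/n) + 1}{n}}\right\}. $$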