Let $\epsilon \in (0,1)$ be given. There exists a set of points $Q$ such that $|Q| \le (3/\epsilon)^K$, $\|q\|_2 \le 1$ for all $q \in Q$, and for any $x \in \mathbb{R}^K$ with $\|x\|_2 \le 1$ there is a point $q \in Q$ satisfying $\|x - q\|_2 \le \epsilon$.
We construct $Q$ in a greedy fashion. We first select an arbitrary point $q_1$ with $\|q_1\|_2 \le 1$. We then continue adding points to $Q$ so that at step $i$ we add a point $q_i$ with $\|q_i\|_2 \le 1$ which satisfies $\|q_i - q_j\|_2 > \epsilon$ for all $j < i$. This continues until we can add no more points (and hence for any $x \in \mathbb{R}^K$ with $\|x\|_2 \le 1$ there is a point $q \in Q$ satisfying $\|x - q\|_2 \le \epsilon$). Now we wish to bound $|Q|$. Observe that if we center balls of radius $\epsilon/2$ at each point in $Q$, then these balls are disjoint and lie within a ball of radius $1 + \epsilon/2$. Thus, if $B^K(r)$ denotes a ball of radius $r$ in $\mathbb{R}^K$, then

$$|Q| \cdot \mathrm{Vol}\left(B^K(\epsilon/2)\right) \le \mathrm{Vol}\left(B^K(1 + \epsilon/2)\right),$$

and hence

$$|Q| \le \frac{\mathrm{Vol}\left(B^K(1 + \epsilon/2)\right)}{\mathrm{Vol}\left(B^K(\epsilon/2)\right)} = \frac{(1 + \epsilon/2)^K}{(\epsilon/2)^K} = \left(\frac{2}{\epsilon} + 1\right)^K \le \left(\frac{3}{\epsilon}\right)^K.$$
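The greedy construction above can be sketched numerically. A minimal illustration, with the dimension, separation radius, and candidate count chosen arbitrarily; random candidates only approximate a maximal packing, but the separation property and the volume bound of the lemma still hold:

```python
import numpy as np
from itertools import combinations

def greedy_net(K, eps, n_candidates=5000, seed=0):
    """Greedily build an eps-separated subset of the unit ball in R^K.

    Mirrors the proof: each accepted point keeps distance > eps from all
    previously accepted points.  Random candidates only approximate a
    maximal packing, but the bound |Q| <= (3/eps)^K still applies.
    """
    rng = np.random.default_rng(seed)
    # Uniform points in the unit ball: uniform direction, radius ~ U^(1/K).
    pts = rng.standard_normal((n_candidates, K))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    pts *= rng.random((n_candidates, 1)) ** (1.0 / K)

    Q = []
    for cand in pts:
        if all(np.linalg.norm(cand - q) > eps for q in Q):
            Q.append(cand)
    return Q

K, eps = 3, 0.5
Q = greedy_net(K, eps)
# Every pair of accepted points is separated by more than eps ...
assert all(np.linalg.norm(a - b) > eps for a, b in combinations(Q, 2))
# ... and the size respects the volume-packing bound from the lemma.
assert len(Q) <= (3.0 / eps) ** K
print(f"|Q| = {len(Q)}, bound (3/eps)^K = {(3.0 / eps) ** K:.0f}")
```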
We now turn to our main theorem, which is based on the proof given in [link].
Fix $\delta \in (0,1)$. Let $\Phi$ be an $M \times N$ random matrix whose entries $\phi_{ij}$ are i.i.d. with $\phi_{ij}$ drawn according to a strictly sub-Gaussian distribution with $c^2 = 1/M$. If

$$M \ge \kappa_1 K \log\left(\frac{N}{K}\right),$$

then $\Phi$ satisfies the RIP of order $K$ with the prescribed $\delta$ with probability exceeding $1 - 2e^{-\kappa_2 M}$, where $\kappa_1$ is arbitrary and $\kappa_2 = \delta^2/2\kappa^* - \log(42e/\delta)/\kappa_1$.
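Before turning to the proof, the theorem's conclusion can be illustrated with a small Monte Carlo sketch. We take Gaussian entries $N(0, 1/M)$, one strictly sub-Gaussian choice, and check that $\|\Phi x\|_2^2$ stays close to $\|x\|_2^2$ on many random $K$-sparse unit vectors; all dimensions and the value of $\delta$ below are illustrative:

```python
import numpy as np

# Gaussian entries with variance 1/M are one strictly sub-Gaussian choice.
# We probe random K-sparse unit vectors and record the worst deviation of
# ||Phi x||_2^2 from 1, which the RIP requires to be at most delta.
rng = np.random.default_rng(1)
M, N, K, delta = 256, 1024, 8, 0.5

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

worst = 0.0
for _ in range(1000):
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    x /= np.linalg.norm(x)          # unit norm suffices, by linearity
    worst = max(worst, abs(np.linalg.norm(Phi @ x) ** 2 - 1.0))

print(f"largest deviation of ||Phi x||_2^2 from 1: {worst:.3f}")
assert worst < delta                # consistent with RIP of order K
```

This only samples a finite set of sparse vectors rather than certifying the RIP over all of them, which is exactly the gap the covering argument below closes.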
First note that it is enough to prove [link] in the case $\|x\|_2 = 1$, since $\Phi$ is linear. Next, fix an index set $T \subseteq \{1, 2, \ldots, N\}$ with $|T| = K$, and let $X_T$ denote the $K$-dimensional subspace of vectors supported on $T$. We choose a finite set of points $Q_T$ such that $Q_T \subseteq X_T$, $\|q\|_2 \le 1$ for all $q \in Q_T$, and for all $x \in X_T$ with $\|x\|_2 \le 1$ we have

$$\min_{q \in Q_T} \|x - q\|_2 \le \frac{\delta}{14}.$$
From [link], we can choose such a set $Q_T$ with $|Q_T| \le (42/\delta)^K$. We then repeat this process for each possible index set $T$, and collect all the sets $Q_T$ together:

$$Q = \bigcup_{T : |T| = K} Q_T.$$
There are $\binom{N}{K}$ possible index sets $T$. We can bound this number by

$$\binom{N}{K} = \frac{N!}{K!(N-K)!} \le \frac{N^K}{K!} \le \left(\frac{eN}{K}\right)^K,$$

where the last inequality follows since from Stirling's approximation we have $K! \ge (K/e)^K$. Hence $|Q| \le (42eN/\delta K)^K$. Since the entries of $\Phi$ are drawn according to a strictly sub-Gaussian distribution, from [link] we have [link]. We next use the union bound to apply [link] to this set of points with $\epsilon = \delta/\sqrt{2}$, with the result that, with probability exceeding
$$1 - 2\left(\frac{42eN}{\delta K}\right)^K e^{-M\delta^2/2\kappa^*},$$

we have

$$\left(1 - \frac{\delta}{\sqrt{2}}\right)\|q\|_2^2 \le \|\Phi q\|_2^2 \le \left(1 + \frac{\delta}{\sqrt{2}}\right)\|q\|_2^2, \quad \text{for all } q \in Q.$$
We observe that if $M$ satisfies [link] then $K \log(N/K) \le M/\kappa_1$, and hence

$$K \log\left(\frac{42eN}{\delta K}\right) = K\log\left(\frac{N}{K}\right) + K\log\left(\frac{42e}{\delta}\right) \le \frac{M}{\kappa_1}\log\left(\frac{42e}{\delta}\right),$$

and thus [link] exceeds $1 - 2e^{-\kappa_2 M}$ as desired.
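The counting step above can be sanity-checked numerically. A quick sketch of both the Stirling bound $K! \ge (K/e)^K$ and the resulting binomial bound, over a few illustrative sizes:

```python
import math

# Verify binom(N, K) <= (e N / K)^K, which follows from the Stirling-type
# bound K! >= (K / e)^K, across a range of illustrative (N, K) pairs.
for N in (100, 1000, 10000):
    for K in (1, 5, 20, 50):
        assert math.factorial(K) >= (K / math.e) ** K
        lhs = math.comb(N, K)
        rhs = (math.e * N / K) ** K
        assert lhs <= rhs
print("binom(N, K) <= (eN/K)^K verified on all test cases")
```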
We now define $A$ as the smallest number such that

$$\|\Phi x\|_2 \le \sqrt{1 + A}\,\|x\|_2, \quad \text{for all } x \in X_T \text{ with } \|x\|_2 \le 1.$$

Our goal is to show that $A \le \delta$. For this, we recall that for any $x \in X_T$ with $\|x\|_2 \le 1$, we can pick a $q \in Q_T$ such that $\|x - q\|_2 \le \delta/14$ and such that $x - q \in X_T$ (since if $x \in X_T$, we can pick $q \in Q_T \subseteq X_T$ satisfying $\|x - q\|_2 \le \delta/14$). In this case we have

$$\|\Phi x\|_2 \le \|\Phi q\|_2 + \|\Phi(x - q)\|_2 \le \sqrt{1 + \frac{\delta}{\sqrt{2}}} + \sqrt{1 + A}\cdot\frac{\delta}{14}.$$
Since by definition $A$ is the smallest number for which [link] holds, we obtain $\sqrt{1 + A} \le \sqrt{1 + \delta/\sqrt{2}} + \sqrt{1 + A}\,\delta/14$. Therefore

$$\sqrt{1 + A} \le \frac{\sqrt{1 + \delta/\sqrt{2}}}{1 - \delta/14} \le \sqrt{1 + \delta},$$

as desired. We have proved the upper inequality in [link]. The lower inequality follows from this since

$$\|\Phi x\|_2 \ge \|\Phi q\|_2 - \|\Phi(x - q)\|_2 \ge \sqrt{1 - \frac{\delta}{\sqrt{2}}} - \sqrt{1 + \delta}\cdot\frac{\delta}{14} \ge \sqrt{1 - \delta},$$
which completes the proof.
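The two constant-chasing inequalities that close the proof can be confirmed numerically over a grid of $\delta$ values; a quick sketch:

```python
import numpy as np

# Numerically confirm the two closing inequalities for delta in (0, 1):
#   sqrt(1 + delta/sqrt(2)) / (1 - delta/14)          <= sqrt(1 + delta)
#   sqrt(1 - delta/sqrt(2)) - sqrt(1 + delta)*delta/14 >= sqrt(1 - delta)
for delta in np.linspace(0.01, 0.99, 99):
    upper = np.sqrt(1 + delta / np.sqrt(2)) / (1 - delta / 14)
    assert upper <= np.sqrt(1 + delta)
    lower = np.sqrt(1 - delta / np.sqrt(2)) - np.sqrt(1 + delta) * delta / 14
    assert lower >= np.sqrt(1 - delta)
print("closing inequalities hold on the delta grid")
```

A grid check is not a proof, of course, but it makes the role of the constants $\sqrt{2}$ and $14$ concrete.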
Above we proved that the RIP holds with high probability when the matrix $\Phi$ is drawn according to a strictly sub-Gaussian distribution. However, we are often interested in signals that are sparse or compressible in some orthonormal basis $\Psi$, in which case we would like the matrix $\Phi\Psi$ to satisfy the RIP. In this setting it is easy to see that by choosing our net of points in the $K$-dimensional subspaces spanned by sets of $K$ columns of $\Phi\Psi$, [link] will establish the RIP for $\Phi\Psi$ for $\Phi$ again drawn from a sub-Gaussian distribution. This universality of $\Phi$ with respect to the sparsity-inducing basis is an attractive property that was initially observed for the Gaussian distribution (based on symmetry arguments), but which we now see is a property of more general sub-Gaussian distributions. Indeed, it follows that with high probability such a $\Phi$ will simultaneously satisfy the RIP with respect to an exponential number of fixed bases.
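The universality claim can be sketched in the Gaussian case, where it follows from rotation invariance: if $\Phi$ is Gaussian and $\Psi$ is any fixed orthonormal basis, then $\Phi\Psi$ has the same distribution as $\Phi$. Below, $\Psi$ is an arbitrary random orthonormal basis built via QR, and all sizes are illustrative:

```python
import numpy as np

# If Phi has i.i.d. N(0, 1/M) entries and Psi is orthonormal, Phi @ Psi is
# again Gaussian with the same distribution, so it inherits the same norm
# preservation for vectors that are sparse in the basis Psi.
rng = np.random.default_rng(3)
M, N, K = 256, 512, 8

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))   # orthonormal columns
A = Phi @ Psi                                        # the product matrix

worst = 0.0
for _ in range(500):
    alpha = np.zeros(N)                  # K-sparse coefficients in basis Psi
    alpha[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    alpha /= np.linalg.norm(alpha)
    worst = max(worst, abs(np.linalg.norm(A @ alpha) ** 2 - 1.0))

print(f"largest deviation for Phi @ Psi on sparse vectors: {worst:.3f}")
assert worst < 0.5    # same concentration as for Phi acting on sparse x
```

For general sub-Gaussian $\Phi$ the distributional identity no longer holds exactly, which is why the text instead reruns the covering argument in the subspaces spanned by columns of $\Phi\Psi$.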