We now show how to exploit the concentration of measure properties of sub-Gaussian distributions to provide a simple proof that sub-Gaussian matrices satisfy the restricted isometry property (RIP). Specifically, we wish to show that by constructing an $M \times N$ matrix $\Phi$ at random with $M$ sufficiently large, then with high probability there exists a $\delta \in (0,1)$ such that

$$(1 - \delta) \|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta) \|x\|_2^2$$

holds for all $x \in \Sigma_K$ (where $\Sigma_K$ denotes the set of all signals $x$ with at most $K$ nonzeros).
We begin by observing that if all we require is that $\delta_{2K} > 0$, then we may set $M = 2K$ and draw $\Phi$ according to a Gaussian distribution, or indeed any continuous univariate distribution. In this case, with probability 1, any subset of $2K$ columns will be linearly independent, and hence the singular values of all subsets of $2K$ columns will be bounded below by $\sqrt{1 - \delta_{2K}}$ where $\delta_{2K} < 1$. However, suppose we wish to know the constant $\delta_{2K}$. In order to find the value of the constant we must consider all $\binom{N}{2K}$ possible $2K$-dimensional subspaces of $\mathbb{R}^N$. From a computational perspective, this is impossible for any realistic values of $N$ and $K$. Moreover, in light of the lower bounds we described earlier in this course, the actual value of $\delta_{2K}$ in this case is likely to be very close to 1. Thus, we focus instead on the problem of achieving the RIP of order $2K$ for a specified constant $\delta_{2K}$.
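To make the computational obstacle concrete, the following sketch (the helper `rip_constant` is ours, not part of the course material) computes the RIP constant of a small random matrix by brute force, examining every column subset. Even for an $8 \times 12$ matrix with $K = 2$ this requires $\binom{12}{2} = 66$ Gram matrices, and the count explodes combinatorially beyond toy sizes.

```python
import itertools

import numpy as np

def rip_constant(Phi, K):
    """Brute-force RIP constant: max deviation of the Gram-matrix
    eigenvalues from 1 over all C(N, K) column subsets (tiny sizes only)."""
    M, N = Phi.shape
    delta = 0.0
    for T in itertools.combinations(range(N), K):
        G = Phi[:, T].T @ Phi[:, T]       # Gram matrix of the column subset
        eigs = np.linalg.eigvalsh(G)      # sorted ascending
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta

rng = np.random.default_rng(0)
M, N, K = 8, 12, 2
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))  # i.i.d. entries, variance 1/M
print(rip_constant(Phi, K))               # already 66 subsets for these tiny sizes
```

Note that $\delta_K$ is nondecreasing in $K$, since the eigenvalues of a principal submatrix interlace those of the full Gram matrix.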
To ensure that the matrix $\Phi$ will satisfy the RIP, we will impose two conditions on the random distribution. First, we require that the distribution is sub-Gaussian. In order to simplify our argument, we will use the simpler results stated in Corollary 2 of "Concentration of measure for sub-Gaussian random variables", which we briefly recall.
Suppose that $\Phi$ is an $M \times N$ matrix whose entries $\phi_{ij}$ are i.i.d. with $\phi_{ij} \sim \mathrm{SSub}(1/M)$. Let $Y = \Phi x$ for $x \in \mathbb{R}^N$. Then for any $\epsilon > 0$, and any $x \in \mathbb{R}^N$,

$$E\left(\|Y\|_2^2\right) = \|x\|_2^2$$

and

$$P\left(\left|\,\|Y\|_2^2 - \|x\|_2^2\,\right| \ge \epsilon \|x\|_2^2\right) \le 2 \exp\left(-\frac{M \epsilon^2}{\kappa^*}\right)$$

with $\kappa^* = 2/(1 - \log 2) \approx 6.52$.
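As a sanity check, the sketch below draws $\Phi$ with i.i.d. Gaussian entries of variance $1/M$ (Gaussians are strictly sub-Gaussian, so the corollary applies) and verifies empirically that $\|\Phi x\|_2^2$ concentrates around $\|x\|_2^2$, with the observed tail frequency falling below the $2\exp(-M\epsilon^2/\kappa^*)$ bound.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, trials, eps = 128, 256, 500, 0.5

x = rng.normal(size=N)
x /= np.linalg.norm(x)                    # unit-norm test signal, so E||Phi x||^2 = 1

devs = np.empty(trials)
for t in range(trials):
    Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))  # entries ~ N(0, 1/M)
    devs[t] = abs(np.linalg.norm(Phi @ x) ** 2 - 1.0)

kappa = 2.0 / (1.0 - np.log(2.0))         # kappa* = 2/(1 - log 2) ~ 6.52
print(devs.mean())                        # small: concentration around ||x||^2 = 1
print(np.mean(devs >= eps))               # empirical tail frequency
print(2.0 * np.exp(-M * eps**2 / kappa))  # the corollary's tail bound
```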
By exploiting this corollary, we assume that the distribution used to construct $\Phi$ is strictly sub-Gaussian. This is done simply to yield more concrete constants. The argument could easily be modified to establish a similar result for general sub-Gaussian distributions by instead using Theorem 2 of "Concentration of measure for sub-Gaussian random variables".
Our second condition is that we require that the distribution yield a matrix that is approximately norm-preserving, which will require that

$$E\left(\phi_{ij}^2\right) = \frac{1}{M},$$

and hence the variance is $1/M$.
We shall now show how the concentration of measure inequality in [link] can be used together with covering arguments to prove the RIP for sub-Gaussian random matrices. Our general approach will be to construct nets of points in each $K$-dimensional subspace, apply [link] to all of these points through a union bound, and then extend the result from our finite set of points to all possible $K$-dimensional signals. Thus, in order to prove the result, we will require the following upper bound on the number of points needed to construct such nets. (For an overview of results similar to [link] and of various related concentration of measure results, we refer the reader to the excellent introduction of [link].)
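The bookkeeping behind the union bound can be sketched numerically. Assuming the standard covering estimate of at most $(42/\delta)^K$ net points per $K$-dimensional subspace (the constant corresponds to a $\delta/14$-net, as in Baraniuk, Davenport, DeVore, and Wakin's simple proof of the RIP; the exact constant in the lemma referenced above may differ), the logarithm of the total number of net points over all $\binom{N}{K}$ subspaces grows like $K \log(N/K)$, which is why on the order of $K \log(N/K)/\delta^2$ measurements suffice.

```python
import math

def log_num_net_points(N, K, delta):
    """Log of the total point count in the union bound: C(N, K) subspaces,
    each covered by at most (42/delta)^K net points (assumed constant)."""
    log_binom = math.lgamma(N + 1) - math.lgamma(K + 1) - math.lgamma(N - K + 1)
    return log_binom + K * math.log(42.0 / delta)

# Each net point fails with probability at most 2*exp(-M*eps^2/kappa*),
# so M need only exceed a constant times this logarithm divided by eps^2.
N, K, delta = 1000, 10, 0.1
print(log_num_net_points(N, K, delta))    # grows like K*log(N/K) + K*log(42/delta)
```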