We now turn to the question of how to construct matrices that satisfy the restricted isometry property (RIP). It is possible to deterministically construct matrices of size $M \times N$ that satisfy the RIP of order $K$, but such constructions also require $M$ to be relatively large [link] , [link] . For example, the construction in [link] requires $M = O(K^2 \log N)$ while the construction in [link] requires $M = O(K N^{\alpha})$ for some constant $\alpha$. In many real-world settings, these results would lead to an unacceptably large requirement on $M$.
Fortunately, these limitations can be overcome by randomizing the matrix construction. We will construct our random matrices as follows: given $M$ and $N$, generate random matrices $\Phi$ by choosing the entries $\phi_{ij}$ as independent realizations from some probability distribution. We begin by observing that if all we require is that $\delta_{2K} > 0$ then we may set $M = 2K$ and draw $\Phi$ according to a Gaussian distribution. With probability 1, any subset of $2K$ columns will be linearly independent, and hence all subsets of $2K$ columns will be bounded below by $1 - \delta_{2K}$ where $\delta_{2K} > 0$. However, suppose we wish to know the constant $\delta_{2K}$. In order to find the value of the constant we must consider all possible $\binom{N}{K}$ $K$-dimensional subspaces of $\mathbb{R}^N$. From a computational perspective, this is impossible for any realistic values of $N$ and $K$. Thus, we focus on the problem of achieving the RIP of order $2K$ for a specified constant $\delta_{2K}$. Our treatment is based on the simple approach first described in [link] and later generalized to a larger class of random matrices in [link] .
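As an illustration, the basic random construction takes only a few lines of NumPy. The helper name `gaussian_sensing_matrix` and the specific sizes below are ours, chosen for illustration: with $M = 2K$ rows, any fixed set of $2K$ Gaussian columns is linearly independent with probability 1, even though certifying a particular constant $\delta_{2K}$ would require examining every subspace.

```python
import numpy as np

def gaussian_sensing_matrix(M, N, seed=None):
    """Draw an M x N matrix with i.i.d. N(0, 1/M) entries.

    (Illustrative helper, not from the text.) The variance 1/M makes
    the matrix norm-preserving in expectation.
    """
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

K, N = 10, 100
M = 2 * K
Phi = gaussian_sensing_matrix(M, N, seed=0)

# One arbitrary choice of 2K columns: full rank with probability 1.
subset = Phi[:, :2 * K]
assert np.linalg.matrix_rank(subset) == 2 * K
```

Checking this for one subset is easy; checking it for all $\binom{N}{K}$ subsets, as a certified $\delta_{2K}$ would require, is what makes the deterministic verification intractable.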
To ensure that the matrix will satisfy the RIP, we will impose two conditions on the random distribution. First, we require that the distribution yield a matrix that is norm-preserving, which requires that

$$\mathbb{E}\left(\phi_{ij}^2\right) = \frac{1}{M},$$

and hence the variance of the distribution is $1/M$. Second, we require that the distribution is a sub-Gaussian distribution , meaning that there exists a constant $c > 0$ such that

$$\mathbb{E}\left(e^{\phi_{ij} t}\right) \le e^{c^2 t^2 / 2}$$

for all $t \in \mathbb{R}$. This says that the moment-generating function of our distribution is dominated by that of a Gaussian distribution, which is equivalent to requiring that the tails of our distribution decay at least as fast as the tails of a Gaussian distribution. Examples of sub-Gaussian distributions include the Gaussian distribution, the Bernoulli distribution taking values $\pm 1/\sqrt{M}$, and more generally any distribution with bounded support. See "Sub-Gaussian random variables" for more details.
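A quick numerical check (illustrative, not from the text) that two of the sub-Gaussian examples just named, the Gaussian and the $\pm 1/\sqrt{M}$ Bernoulli, both satisfy the variance-$1/M$ normalization:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_samples = 64, 200_000

# Two sub-Gaussian distributions, both normalized so that the
# entrywise variance is 1/M.
gauss = rng.normal(0.0, 1.0 / np.sqrt(M), size=n_samples)
bern = rng.choice([-1.0, 1.0], size=n_samples) / np.sqrt(M)

# Empirical variances should both be close to 1/M = 0.015625.
assert abs(gauss.var() - 1.0 / M) < 1e-3
assert abs(bern.var() - 1.0 / M) < 1e-3
```

The Bernoulli case also illustrates the bounded-support remark: its support is the two points $\pm 1/\sqrt{M}$, so it is sub-Gaussian automatically.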
For the moment, we will actually assume a bit more than sub-Gaussianity. Specifically, we will assume that the entries of $\Phi$ are strictly sub-Gaussian, which means that they satisfy [link] with $c^2 = \mathbb{E}\left(\phi_{ij}^2\right) = 1/M$. Similar results to the following would hold for general sub-Gaussian distributions, but to simplify the constants we restrict our present attention to the strictly sub-Gaussian $\Phi$. In this case we have the following useful result, which is proven in "Concentration of measure for sub-Gaussian random variables" .
Suppose that $\Phi$ is an $M \times N$ matrix whose entries $\phi_{ij}$ are i.i.d. with $\phi_{ij}$ drawn according to a strictly sub-Gaussian distribution with $c^2 = 1/M$. Let $Y = \Phi x$ for $x \in \mathbb{R}^N$. Then for any $\epsilon > 0$, and any $x \in \mathbb{R}^N$,

$$\mathbb{E}\left(\|Y\|_2^2\right) = \|x\|_2^2$$

and

$$\mathbb{P}\left( \left| \|Y\|_2^2 - \|x\|_2^2 \right| \ge \epsilon \|x\|_2^2 \right) \le 2 \exp\left( -\frac{M \epsilon^2}{\kappa^*} \right)$$

with $\kappa^* = 2/(1 - \log 2) \approx 6.52$.
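A Monte Carlo sketch of this concentration bound (the sizes, trial count, and seed are illustrative choices of ours; the constant $\kappa^* = 2/(1 - \log 2)$ is the one assumed in the lemma):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, trials, eps = 128, 256, 500, 0.25
kappa_star = 2.0 / (1.0 - np.log(2.0))   # ~6.52

# A fixed unit-norm signal, so ||x||_2^2 = 1.
x = rng.normal(size=N)
x /= np.linalg.norm(x)

# Count how often ||Phi x||_2^2 deviates from 1 by at least eps.
deviations = 0
for _ in range(trials):
    Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
    Y = Phi @ x
    if abs(Y @ Y - 1.0) >= eps:
        deviations += 1

empirical = deviations / trials
bound = 2.0 * np.exp(-M * eps**2 / kappa_star)
# The empirical deviation frequency should respect the tail bound.
assert empirical <= bound
```

In practice the empirical frequency is far below the bound, which is what makes such a loose, distribution-free tail estimate strong enough to union-bound over exponentially many sparse supports in the RIP proof.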
This tells us that the norm of a sub-Gaussian random vector strongly concentrates about its mean. Using this result, in "Proof of the RIP for sub-Gaussian matrices" we provide a simple proof based on that in [link] that sub-Gaussian matrices satisfy the RIP.
Fix $\delta \in (0, 1)$. Let $\Phi$ be an $M \times N$ random matrix whose entries $\phi_{ij}$ are i.i.d. with $\phi_{ij}$ drawn according to a strictly sub-Gaussian distribution with $c^2 = 1/M$. If

$$M \ge \kappa_1 K \log\left( \frac{N}{K} \right),$$

then $\Phi$ satisfies the RIP of order $2K$ with the prescribed $\delta$ with probability exceeding $1 - 2e^{-\kappa_2 M}$, where $\kappa_1$ is arbitrary and $\kappa_2 = \delta^2 / 2\kappa^* - \log(42 e / \delta) / \kappa_1$.
Note that in light of the measurement bounds in "The restricted isometry property" we see that [link] achieves the optimal number of measurements (up to a constant).
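A minimal sketch of the measurement bound. The value $\kappa_1 = 10$ below is a placeholder of ours chosen for illustration; the theorem leaves $\kappa_1$ arbitrary, with the failure exponent $\kappa_2$ depending on it:

```python
import math

def min_measurements(K, N, kappa1=10.0):
    """Smallest integer M with M >= kappa1 * K * log(N / K).

    kappa1 = 10.0 is an illustrative placeholder, not a constant
    prescribed by the theorem.
    """
    return math.ceil(kappa1 * K * math.log(N / K))

# For fixed sparsity K, M grows only logarithmically in N.
M1 = min_measurements(K=10, N=10_000)      # 691
M2 = min_measurements(K=10, N=1_000_000)
assert M1 < M2 < 1_000_000
```

This is the sense in which the number of measurements is optimal up to a constant: growing the ambient dimension $N$ by a factor of 100 increases $M$ only additively, through the $\log(N/K)$ term.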
Using random matrices to construct $\Phi$ has a number of additional benefits. To illustrate these, we will focus on the RIP. First, one can show that for random constructions the measurements are democratic , meaning that it is possible to recover a signal using any sufficiently large subset of the measurements [link] , [link] . Thus, by using random $\Phi$ one can be robust to the loss or corruption of a small fraction of the measurements. Second, and perhaps more significantly, in practice we are often more interested in the setting where $x$ is sparse with respect to some basis $\Psi$. In this case what we actually require is that the product $\Phi \Psi$ satisfies the RIP. If we were to use a deterministic construction then we would need to explicitly take $\Psi$ into account in our construction of $\Phi$, but when $\Phi$ is chosen randomly we can avoid this consideration. For example, if $\Phi$ is chosen according to a Gaussian distribution and $\Psi$ is an orthonormal basis then one can easily show that $\Phi \Psi$ will also have a Gaussian distribution, and so provided that $M$ is sufficiently high $\Phi \Psi$ will satisfy the RIP with high probability, just as before. Although less obvious, similar results hold for sub-Gaussian distributions as well [link] . This property, sometimes referred to as universality , constitutes a significant advantage of using random matrices to construct $\Phi$.
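The universality argument for the Gaussian case can be checked numerically: if $\Phi$ has i.i.d. $N(0, 1/M)$ entries and $\Psi$ is orthonormal, rotation invariance of the Gaussian implies $\Phi \Psi$ has the same distribution as $\Phi$. A sketch under assumed sizes (and a basis $\Psi$ built from a QR factorization purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 128, 256

Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# An arbitrary orthonormal basis Psi, obtained here by orthogonalizing
# a random matrix.
Psi, _ = np.linalg.qr(rng.normal(size=(N, N)))

A = Phi @ Psi   # the product that must satisfy the RIP

# By rotation invariance, the entries of A are again i.i.d. N(0, 1/M):
# zero mean, entrywise variance 1/M.
assert abs(A.mean()) < 3e-3
assert abs(A.var() - 1.0 / M) < 1e-3
```

Each row of $\Phi$ has covariance $(1/M) I$, and applying the orthonormal $\Psi$ leaves that covariance unchanged, which is why the empirical statistics of $A$ match those of $\Phi$.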
Finally, we note that since fully random matrices are sometimes impractical to build in hardware, several hardware architectures have been implemented and/or proposed that enable random measurements to be acquired in practical settings. Examples include the random demodulator [link] , random filtering [link] , the modulated wideband converter [link] , random convolution [link] , [link] , and the compressive multiplexer [link] . These architectures typically use a reduced amount of randomness and are modeled via matrices that have significantly more structure than a fully random matrix. Perhaps somewhat surprisingly, while it is typically not quite as easy as in the fully random case, one can prove that many of these constructions also satisfy the RIP.