In addition to convex optimization and greedy pursuit approaches, there is another important class of sparse recovery algorithms that we will refer to as combinatorial algorithms. These algorithms, mostly developed by the theoretical computer science community, in many cases pre-date the compressive sensing literature but are highly relevant to the sparse signal recovery problem.
The oldest combinatorial algorithms were developed in the context of group testing [link], [link], [link]. In the group testing problem, we suppose that there are N total items, of which an unknown subset of K elements are anomalous and need to be identified. For example, we might wish to identify defective products in an industrial setting, or identify a subset of diseased tissue samples in a medical context. In both of these cases the vector x indicates which elements are anomalous, i.e., x_i ≠ 0 for the K anomalous elements and x_i = 0 otherwise. Our goal is to design a collection of tests that allow us to identify the support (and possibly the values of the nonzeros) of x while also minimizing the number of tests performed. In the simplest practical setting these tests are represented by a binary matrix Φ whose entries φ_ij are equal to 1 if and only if the j-th item is used in the i-th test. If the output of the tests is linear with respect to the inputs, then the problem of recovering the vector x from the test outputs y = Φx is essentially the same as the standard sparse recovery problem.
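To make the test model concrete, the short sketch below simulates this setup; it is illustrative rather than taken from the text, and the sizes N, K, and M, the random pooling design, and the simple elimination rule at the end are all assumptions. It builds a binary pooling matrix Φ, forms the linear test outputs y = Φx for a K-sparse anomaly vector x, and then uses the observation that any item appearing in a test with output zero cannot be anomalous.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes: N items, K anomalies, M pooled tests.
N, K, M = 100, 3, 20

# Binary test design: Phi[i, j] = 1 iff item j is pooled into test i.
Phi = (rng.random((M, N)) < 0.1).astype(int)

# Unknown anomaly indicator: x_j = 1 for the K anomalous items, 0 otherwise.
x = np.zeros(N, dtype=int)
x[rng.choice(N, size=K, replace=False)] = 1

# Linear test outputs: y = Phi x, so y_i counts the anomalies in test i.
y = Phi @ x

# Any item that appears in a test with output 0 cannot be anomalous;
# the remaining items form a superset of the true support.
cleared = np.unique(np.nonzero(Phi[y == 0])[1])
candidates = np.setdiff1d(np.arange(N), cleared)
print("true support:     ", sorted(np.flatnonzero(x)))
print("candidate support:", sorted(candidates))
```

This elimination step is only one simple decoding strategy; more refined combinatorial decoders trade additional tests for stronger recovery guarantees.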
Another application area in which combinatorial algorithms have proven useful is computation on data streams [link], [link]. Suppose that x_i represents the number of packets passing through a network router with destination i. Simply storing the vector x is typically infeasible, since the total number of possible destinations (represented by a 32-bit IP address) is N = 2^32. Thus, instead of attempting to store x directly, one can store y = Φx, where Φ is an M × N matrix with M ≪ N. In this context the vector y is often called a sketch. Note that in this problem y is computed in a different manner than in the compressive sensing context. Specifically, in the network traffic example we do not ever observe x_i directly; rather, we observe increments to x_i (whenever a packet with destination i passes through the router). Thus we construct y iteratively by adding the i-th column of Φ to y each time we observe an increment to x_i, which we can do since y = Φx is linear. When the network traffic is dominated by traffic to a small number of destinations, the vector x is compressible, and thus the problem of recovering x from the sketch y = Φx is again essentially the same as the sparse recovery problem.
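The following sketch, again illustrative rather than from the text (the choice of Φ, the dimensions, and the simulated packet stream are assumptions), shows how y = Φx can be maintained incrementally: each arriving packet with destination i simply adds the i-th column of Φ to the stored sketch, and the vector x is never stored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: the text's destination space is N = 2**32; we shrink it here.
N = 2**12           # number of possible destinations
M = 64              # sketch length, with M << N
Phi = rng.choice([-1, 1], size=(M, N))   # one illustrative choice of sketch matrix

y = np.zeros(M)     # the sketch we actually store
x = np.zeros(N)     # the conceptual count vector, kept here only to verify the claim

# Simulated packet stream, skewed toward a small number of destinations.
for _ in range(10_000):
    i = int(rng.zipf(1.5)) % N        # destination of the next packet
    y += Phi[:, i]                    # streaming update: add the i-th column of Phi
    x[i] += 1                         # the increment this update implicitly encodes

# Because y = Phi x is linear, the incrementally built sketch matches Phi x.
assert np.allclose(y, Phi @ x)
```

In practice the columns of Φ are typically not stored explicitly either, since an explicit M × 2^32 matrix would itself be too large; they are usually generated on the fly, for example from hash functions.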
Several combinatorial algorithms for sparse recovery have been developed in the literature. A non-exhaustive list includes Random Fourier Sampling [link], HHS Pursuit [link], and Sparse Sequential Matching Pursuit [link]. We do not provide a full discussion of each of these algorithms; instead, we describe two simple methods that highlight the flavors of combinatorial sparse recovery — count-min and count-median.