An important class of sparse recovery algorithms falls under the purview of convex optimization. Algorithms in this category seek to optimize a convex function $f(x)$ of the unknown variable $x$ over a (possibly unbounded) convex subset of $\mathbb{R}^N$.
Let $J(x)$ be a convex sparsity-promoting cost function (i.e., $J(x)$ is small for sparse $x$). To recover a sparse signal representation $\widehat{x}$ from the measurements $y = \Phi x$, with $\Phi \in \mathbb{R}^{M \times N}$, we may either solve

$$\widehat{x} = \arg\min_{x} J(x) \quad \text{subject to} \quad y = \Phi x,$$

when there is no noise, or solve

$$\widehat{x} = \arg\min_{x} J(x) \quad \text{subject to} \quad H(\Phi x, y) \le \epsilon,$$
when there is noise in the measurements. Here, $H(\cdot,\cdot)$ is a cost function that penalizes the distance between the vectors $\Phi x$ and $y$. For an appropriate penalty parameter $\mu$, [link] is equivalent to the unconstrained formulation:

$$\widehat{x} = \arg\min_{x} \; J(x) + \mu\, H(\Phi x, y)$$

for some $\mu > 0$. The parameter $\mu$ may be chosen by trial and error, or by statistical techniques such as cross-validation [link].
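As an illustration of the cross-validation route, the sketch below uses scikit-learn's LassoCV to select the penalty on a synthetic problem; the matrix $\Phi$, the sparse vector, and the noise level are made-up test data, and note that scikit-learn's Lasso objective scales the quadratic term by $1/(2M)$, so its selected alpha is only proportional to the $\mu$ above.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic problem: K-sparse x_true, random Gaussian Phi, noisy measurements y.
rng = np.random.default_rng(0)
N, M, K = 200, 80, 8
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true + 0.01 * rng.standard_normal(M)

# LassoCV sweeps a grid of penalty values and keeps the one with the
# smallest cross-validated prediction error.
model = LassoCV(cv=5, fit_intercept=False).fit(Phi, y)
print("selected penalty:", model.alpha_)
print("recovery error:", np.linalg.norm(model.coef_ - x_true))
```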
For convex programming algorithms, the most common choices of $J$ and $H$ are $J(x) = \|x\|_1$, the $\ell_1$-norm of $x$, and $H(\Phi x, y) = \tfrac{1}{2}\|\Phi x - y\|_2^2$, the (squared) $\ell_2$-norm of the error between the observed measurements and the linear projections of the target vector $x$. In statistics, minimizing this $H$ subject to a constraint $\|x\|_1 \le \delta$ is known as the Lasso problem. More generally, $J(\cdot)$ acts as a regularization term and can be replaced by other, more complex functions; for example, the desired signal may be piecewise constant and simultaneously have a sparse representation under a known basis transform $\Psi$. In this case, we may use a mixed regularization term:

$$J(x) = \mathrm{TV}(x) + \lambda \|\Psi^{T} x\|_1 .$$
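With the standard $\ell_1$/$\ell_2$ choices above, the unconstrained formulation can be attacked with very simple first-order schemes. The sketch below is a minimal, illustrative implementation of iterative soft-thresholding (ISTA), one such scheme rather than a method prescribed by this module; the function name ista_l1, the step-size rule, and the synthetic data are assumptions made for the example.

```python
import numpy as np

def ista_l1(Phi, y, mu, step=None, n_iter=500):
    """Minimize 0.5*||Phi x - y||_2^2 + mu*||x||_1 by proximal gradient
    (ISTA): a gradient step on the quadratic term followed by
    soft-thresholding, the proximal operator of the l1 norm."""
    M, N = Phi.shape
    if step is None:
        # Step size 1/L, where L is the Lipschitz constant of the gradient
        # of the quadratic term (largest eigenvalue of Phi^T Phi).
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(N)
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)                              # gradient of the smooth part
        z = x - step * grad                                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0.0)   # soft-threshold
    return x

# Tiny synthetic example: sparse x_true, random Gaussian Phi, noisy measurements.
rng = np.random.default_rng(0)
N, M, K = 256, 64, 5
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true + 0.01 * rng.standard_normal(M)
x_hat = ista_l1(Phi, y, mu=0.01)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```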
It might be tempting to use conventional convex optimization packages for the above formulations ( [link] , [link] , and [link] ). Nevertheless, these problems pose two key challenges specific to practical CS applications: (i) real-world applications are invariably large-scale (an image with a resolution of $1024 \times 1024$ pixels leads to optimization over a million variables, well beyond the reach of any standard optimization software package); (ii) the objective function is nonsmooth, and standard smoothing techniques do not yield very good results. Hence, for these problems, conventional algorithms (typically involving matrix factorizations) are not effective or even applicable. These unique challenges encountered in the context of CS have led to considerable interest in developing improved sparse recovery algorithms in the optimization community.
In the noiseless case, the $\ell_1$-minimization problem (obtained by substituting $J(x) = \|x\|_1$ in [link] ) can be recast as a linear program (LP) with equality constraints. These can be solved in polynomial time ($O(N^3)$) using standard interior-point methods [link] . This was the first feasible reconstruction algorithm used for CS recovery and has strong theoretical guarantees, as shown earlier in this course . In the noisy case, the problem can be recast as a second-order cone program (SOCP) with quadratic constraints. Solving LPs and SOCPs is a principal thrust in optimization research; nevertheless, their application to practical CS problems is limited by the fact that both the signal dimension $N$ and the number of constraints $M$ can be very large in many scenarios. Note that both LPs and SOCPs correspond to the constrained formulations in [link] and [link] and are solved using first-order interior-point methods.
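To make the LP recasting concrete, a standard trick is to split $x = u - v$ with $u, v \ge 0$, so that $\|x\|_1 = \mathbf{1}^{T}(u + v)$ and the equality constraint becomes $\Phi(u - v) = y$. The sketch below illustrates this with scipy.optimize.linprog (which calls a general-purpose HiGHS solver rather than a CS-specific interior-point code); the helper name l1_min_lp and the test data are assumptions for the example.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_lp(Phi, y):
    """Solve min ||x||_1 subject to Phi x = y as a linear program by
    splitting x = u - v with u, v >= 0, so ||x||_1 = 1^T (u + v)."""
    M, N = Phi.shape
    c = np.ones(2 * N)             # objective: sum of u and v entries
    A_eq = np.hstack([Phi, -Phi])  # equality constraint: Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

# Noiseless example: exact recovery of a sparse vector from M < N measurements.
rng = np.random.default_rng(1)
N, M, K = 128, 50, 6
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true
x_hat = l1_min_lp(Phi, y)
print("max abs error:", np.max(np.abs(x_hat - x_true)))
```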