$$f_{Y|X}(y \mid w) = c_1 e^{-c_2 \| y - \Phi w \|^2},$$

where $c_1 = (2\pi\sigma_Z^2)^{-M/2}$ and $c_2 = \frac{1}{2\sigma_Z^2}$ are constants and $\|\cdot\|$ denotes the Euclidean norm. Plugging into [link] and taking log likelihoods, we obtain

$$x_{MAP} = \arg\min_{w} \Psi_X(w),$$

where $\Psi_X(\cdot)$ denotes the objective function (risk)

$$\Psi_X(w) \triangleq -\ln(f_X(w)) + c_2 \| y - \Phi w \|^2;$$

our ideal risk would be $\Psi_X(x_{MAP})$.
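For concreteness, the risk $\Psi_X$ is easy to evaluate numerically. The sketch below is our own illustration: the i.i.d. Laplacian prior, the problem sizes, and the noise level are assumptions, not part of the chapter.

```python
import numpy as np

def risk(w, y, Phi, sigma_z, log_prior):
    """Psi_X(w) = -ln f_X(w) + c2 * ||y - Phi w||^2, with c2 = 1/(2 sigma_z^2)."""
    c2 = 1.0 / (2.0 * sigma_z ** 2)
    return -log_prior(w) + c2 * np.sum((y - Phi @ w) ** 2)

# Illustrative prior (an assumption): i.i.d. Laplacian, f_X(w) = prod_n (1/2) e^{-|w_n|}
def log_laplace(w):
    return np.sum(-np.abs(w) - np.log(2.0))

rng = np.random.default_rng(0)
M, N = 5, 10
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # entries ~ N(0, 1/M), as in Theorem 9
x = np.zeros(N)
x[2] = 1.0
y = Phi @ x                                      # noiseless measurements for simplicity
# At w = x the residual vanishes, so the risk reduces to -ln f_X(x) = 1 + N ln 2.
print(risk(x, y, Phi, sigma_z=0.1, log_prior=log_laplace))
```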

Instead of performing continuous-valued MAP estimation, we optimize for the MAP in the discretized domain $\hat{\alpha}^N$. We begin with a technical condition on the input.

Condition 1 [link] We require that the probability density have bounded derivatives,

$$\left| \frac{d}{dx_n} \ln(f_X(x)) \right| < \rho,$$

where $\frac{d}{dx_n}$ is the derivative with respect to entry $n$ of $x$, $n \in \{1, \ldots, N\}$, and $\rho > 0$.

Let $\tilde{x}_{MAP}$ be the quantization bin in $\hat{\alpha}^N$ nearest to $x_{MAP}$. Condition 1 ensures that a small perturbation from $x_{MAP}$ to $\tilde{x}_{MAP}$ does not change $\ln(f_X(\cdot))$ by much. We use this fact to prove that $\Psi_X(\tilde{x}_{MAP})$ is asymptotically close to $\Psi_X(x_{MAP})$.
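The snap-to-grid map from $x_{MAP}$ to $\tilde{x}_{MAP}$ is a per-entry nearest-neighbor search; a minimal sketch, assuming a hypothetical uniform quantization grid $\hat{\alpha}$ and an illustrative input:

```python
import numpy as np

def quantize_to_grid(x, grid):
    """Map each entry of x to the nearest point of the finite grid (alpha-hat)."""
    grid = np.asarray(grid)
    # For each entry, pick the grid point minimizing absolute distance.
    idx = np.abs(x[:, None] - grid[None, :]).argmin(axis=1)
    return grid[idx]

x_map = np.array([0.23, -0.91, 1.48])       # illustrative continuous-valued estimate
grid = np.arange(-2.0, 2.01, 0.5)           # hypothetical uniform grid alpha-hat
x_tilde = quantize_to_grid(x_map, grid)
print(x_tilde)
```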

Theorem 9 [link] Let $\Phi \in \mathbb{R}^{M \times N}$ be an i.i.d. Gaussian measurement matrix where each entry has mean zero and variance $\frac{1}{M}$. Suppose that Condition 1 holds and the aspect ratio $\delta = \frac{M}{N} > 0$, and let the noise $z \in \mathbb{R}^M$ be i.i.d. zero-mean Gaussian. Then for all $\epsilon > 0$, the quantized estimator $\tilde{x}_{MAP}$ satisfies

$$\Psi_X(x_{MAP}) \le \Psi_X(\tilde{x}_{MAP}) < \Psi_X(x_{MAP}) + N\epsilon$$

almost surely as $N \to \infty$.

Given that Theorem 9 shows that the risk penalty due to quantization vanishes asymptotically in $N$, we now describe a Kolmogorov-inspired estimator for CS over a quantized grid. We define the energy $\epsilon(w^n)$:

$$\epsilon(w^n) \triangleq H_k(w^n) + c_4 \| y - \Phi w^n \|^2,$$

where $c_4 = c_2 \log_2(e)$.
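The Kolmogorov complexity $H_k$ is not computable. Purely as an illustration, the bit length of a universal compressor's output can stand in for it; this zlib proxy is our assumption, not the chapter's algorithm:

```python
import zlib
import numpy as np

def complexity_bits(idx):
    """Crude, illustrative stand-in for H_k: bit length of the zlib-compressed
    sequence of grid indices."""
    return 8 * len(zlib.compress(bytes(idx)))

def energy(w, y, Phi, c4, grid):
    """epsilon(w^n) ~ H_k(w^n) + c4 * ||y - Phi w^n||^2, with the proxy above."""
    w = np.asarray(w)
    idx = np.abs(w[:, None] - np.asarray(grid)[None, :]).argmin(axis=1)
    return complexity_bits(idx.tolist()) + c4 * np.sum((y - Phi @ w) ** 2)

# A constant (highly structured) sequence should cost far fewer bits than a
# random one, mirroring the behavior expected of H_k.
rng = np.random.default_rng(1)
constant = [3] * 1000
random_seq = rng.integers(0, 8, size=1000).tolist()
print(complexity_bits(constant), complexity_bits(random_seq))
```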

Ideally, our goal is to compute the globally minimum-energy solution

$$x_{MAP}^{H_k} \triangleq \arg\min_{w^n \in \hat{\alpha}^n} \epsilon(w^n).$$

Theorem 10 [link] Let $X$ be a bounded stationary ergodic source. Then the output $w_r^n$ of Algorithm 1 after $r$ iterations obeys

$$\lim_{r \to \infty} \epsilon(w_r^n) = \min_{v^n \in \hat{\alpha}^n} \epsilon(v^n) = \epsilon\!\left( x_{MAP}^{H_k} \right).$$

We define a stochastic transition matrix $P(r)$ from $\hat{\alpha}^n$ to itself, given by the Boltzmann distribution for super-iteration $r$. Similarly, $\pi(r)$ denotes the stable-state distribution on $\hat{\alpha}^n$ for $P(r)$, satisfying $\pi(r) P(r) = \pi(r)$.
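To make $\pi(r) P(r) = \pi(r)$ concrete, here is a toy sketch of our own, with a hypothetical three-state energy landscape: a Metropolis chain at temperature $T$ has the Boltzmann distribution as its stable state, by detailed balance.

```python
import numpy as np

def boltzmann(energies, temperature):
    """Boltzmann distribution: pi(i) proportional to exp(-E_i / T)."""
    w = np.exp(-(energies - energies.min()) / temperature)
    return w / w.sum()

def metropolis_matrix(energies, temperature):
    """Metropolis transition matrix on a small state space; its stable state
    is the Boltzmann distribution above, by detailed balance."""
    n = len(energies)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # Propose j uniformly, accept with min(1, e^{-(E_j - E_i)/T}).
                P[i, j] = min(1.0, np.exp(-(energies[j] - energies[i]) / temperature)) / (n - 1)
        P[i, i] = 1.0 - P[i].sum()  # remaining mass stays put
    return P

E = np.array([0.0, 1.0, 3.0])           # hypothetical energies epsilon(w^n)
P = metropolis_matrix(E, temperature=0.5)
pi = boltzmann(E, temperature=0.5)
# pi is stable for P: pi @ P == pi (up to floating point)
```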

Definition 4 [link] The Dobrushin ergodic coefficient of a Markov chain transition matrix $P$ is denoted by $\delta(P)$ and defined as

$$\delta(P) \triangleq \max_{1 \le i, j \le N} \frac{1}{2} \| p_i - p_j \|_1,$$

where $p_i$ denotes the $i$th row of $P$, $1 \le i \le N$.

From the definition, $0 \le \delta(P) \le 1$. Moreover, the ergodic coefficient can be rewritten as

$$\delta(P) = 1 - \min_{1 \le i, j \le N} \sum_{k=1}^{N} \min(p_{ik}, p_{jk}),$$

where $p_{ij}$ denotes the entry of $P$ in the $i$th row and $j$th column.
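Both expressions for $\delta(P)$ are easy to check numerically; the transition matrix below is an arbitrary example of ours, not one from the text.

```python
import numpy as np

def dobrushin(P):
    """delta(P) = max_{i,j} (1/2) ||p_i - p_j||_1 over pairs of rows."""
    n = P.shape[0]
    return max(0.5 * np.abs(P[i] - P[j]).sum() for i in range(n) for j in range(n))

def dobrushin_alt(P):
    """Equivalent form: 1 - min_{i,j} sum_k min(p_ik, p_jk)."""
    n = P.shape[0]
    return 1.0 - min(np.minimum(P[i], P[j]).sum() for i in range(n) for j in range(n))

# Arbitrary row-stochastic example matrix.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
print(dobrushin(P), dobrushin_alt(P))
```

For probability vectors $a, b$, the identity $\frac{1}{2}\|a - b\|_1 = 1 - \sum_k \min(a_k, b_k)$ makes the two forms agree, which the assertions below verify.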

We group the product of transition matrices across super-iterations as

$$P(r_1 \to r_2) = \prod_{r=r_1}^{r_2} P(r).$$

There are two common characterizations of the stable-state behavior of a non-homogeneous Markov chain (MC).

Definition 5 [link] A non-homogeneous MC is called weakly ergodic if for any distributions $\mu$ and $\nu$ over the state space $S$, and any $r_1 \in \mathbb{N}$,

$$\limsup_{r_2 \to \infty} \| \mu P(r_1 \to r_2) - \nu P(r_1 \to r_2) \|_1 = 0.$$

Similarly, a non-homogeneous MC is called strongly ergodic if there exists a distribution $\pi$ over the state space $S$ such that for any distribution $\mu$ over $S$, and any $r_1 \in \mathbb{N}$,

$$\limsup_{r_2 \to \infty} \| \mu P(r_1 \to r_2) - \pi \|_1 = 0.$$
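For the special case of a homogeneous chain (our own illustration), repeated products of an ergodic $P$ contract any two initial distributions toward each other in $\ell_1$ norm, which is exactly the weak-ergodicity property; the per-step contraction factor is the Dobrushin coefficient $\delta(P)$.

```python
import numpy as np

# Arbitrary row-stochastic example matrix (homogeneous chain: P(r) = P for all r).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Two maximally different initial distributions over the state space.
mu = np.array([1.0, 0.0, 0.0])
nu = np.array([0.0, 0.0, 1.0])

gaps = []
for _ in range(20):           # accumulate the product P(r1 -> r2) step by step
    mu = mu @ P
    nu = nu @ P
    gaps.append(np.abs(mu - nu).sum())

print(gaps[0], gaps[-1])      # the l1 gap shrinks geometrically
```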

We will use the following two theorems from  [link] in our proof.

Theorem 11 [link] A Markov chain is weakly ergodic if and only if there exists a sequence of integers $0 \le r_1 \le r_2 \le \cdots$ such that

$$\sum_{i=1}^{\infty} \left( 1 - \delta\!\left( P(r_i \to r_{i+1}) \right) \right) = \infty.$$

Theorem 12 [link] Let a Markov chain be weakly ergodic. Assume that there exists a sequence of probability distributions $\{\pi(r)\}_{r=1}^{\infty}$ on the state space $S$ such that $\pi(r) P(r) = \pi(r)$. Then the Markov chain is strongly ergodic if

$$\sum_{r=1}^{\infty} \| \pi(r) - \pi(r+1) \|_1 < \infty.$$

Source:  OpenStax, Universal algorithms in signal processing and communications. OpenStax CNX. May 16, 2013 Download for free at http://cnx.org/content/col11524/1.1