$$\Pr(w_i = b \mid w^n_{\setminus i}) = \frac{\Pr(w_i = b,\, w^n_{\setminus i})}{\Pr(w^n_{\setminus i})} = \frac{\Pr(w_i = b,\, w^n_{\setminus i})}{\sum_{b' \in \hat{\alpha}} \Pr(w_i = b',\, w^n_{\setminus i})} = \frac{\frac{1}{Z_t}\exp\left\{-\frac{1}{t}\,\varepsilon(w_i = b,\, w^n_{\setminus i})\right\}}{\sum_{b'} \frac{1}{Z_t}\exp\left\{-\frac{1}{t}\,\varepsilon(w_i = b',\, w^n_{\setminus i})\right\}}.$$

We can now write,

$$\varepsilon(w_i = b,\, w^n_{\setminus i}) = \varepsilon(a) + \Delta H(b, a) - \beta\,\Delta d(b, a),$$

where $\Delta H(b, a)$ is the change in $H_k(w^n)$ when $w_i = a$ is replaced by $b$, and $\Delta d(b, a) = d(b, x_i) - d(a, x_i) = (b - x_i)^2 - (a - x_i)^2$ is the corresponding change in distortion. Combining [link] and [link],

$$\begin{aligned}
\Pr(w_i = b \mid w^n_{\setminus i}) &= \frac{\exp\left\{-\frac{1}{t}\left[\varepsilon(a) + \Delta H(b, a) - \beta\,\Delta d(b, a)\right]\right\}}{\sum_{b'} \exp\left\{-\frac{1}{t}\left[\varepsilon(a) + \Delta H(b', a) - \beta\,\Delta d(b', a)\right]\right\}} \\
&= \frac{\exp\left\{-\frac{1}{t}\cdot 0\right\}}{\sum_{b'} \exp\left\{-\frac{1}{t}\left[\Delta H(b', a) - \beta\,\Delta d(b', a) - \Delta H(b, a) + \beta\,\Delta d(b, a)\right]\right\}} \\
&= \frac{1}{\sum_{b'} \exp\left\{-\frac{1}{t}\left[\Delta H(b', b) - \beta\,\Delta d(b', b)\right]\right\}}.
\end{aligned}$$
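The simplified form above depends only on the pairwise changes $\Delta H$ and $\Delta d$, not on the absolute energy. As a minimal sketch, assuming hypothetical callables `delta_H(b_prime, b)` and `delta_d(b_prime, b)` supply those changes (their names and interface are not from the text):

```python
import math

def gibbs_conditional(b, alphabet, delta_H, delta_d, beta, t):
    """Boltzmann conditional Pr(w_i = b | rest) in the simplified form:
    1 / sum_{b'} exp{ -(1/t) [ dH(b', b) - beta * dd(b', b) ] }."""
    Z = sum(math.exp(-(delta_H(bp, b) - beta * delta_d(bp, b)) / t)
            for bp in alphabet)
    return 1.0 / Z
```

Note that the term with $b' = b$ contributes $\exp(0) = 1$, so when all candidates change the energy equally the distribution is uniform over the alphabet.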

The maximum change in energy within a single iteration of the MCMC algorithm is then bounded by

$$\Delta_k = \max_{1 \le i \le n}\; \max_{w^n \in \hat{\alpha}^n}\; \max_{b, b' \in \hat{\alpha}} \left| \Delta H_k(b, b') - \beta\,\Delta d(b, b') \right|.$$
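The distortion term in this bound can be evaluated exactly for squared error by scanning all symbol pairs against the input. A small illustrative helper (the function name and interface are my own, not from the text):

```python
def max_distortion_change(alphabet, x):
    # Largest |delta_d(b, b', x_i)| = |(b - x_i)**2 - (b' - x_i)**2|
    # over all reproduction-symbol pairs and all input samples.
    return max(abs((b - xi) ** 2 - (bp - xi) ** 2)
               for b in alphabet for bp in alphabet for xi in x)
```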

We refer to resampling a single location as an iteration, and group the $n$ possible locations into super-iterations. Baron and Weissman [link] recommend an ordering in which each super-iteration scans a random permutation of all $n$ locations of the input, because in this manner each location is scanned fairly often. Other orderings are possible, including a completely random order.

During simulated annealing, the temperature $t$ is gradually decreased; in super-iteration $i$ we use $t = O(1/\log(i))$ [link], [link]. In each iteration, the Gibbs sampler modifies $w^n$ in a random manner that resembles heat-bath concepts in statistical physics. Although MCMC could sink into a local minimum, we decrease the temperature slowly enough that the randomness of Gibbs sampling eventually drives MCMC out of the local minimum toward the set of minimal-energy solutions, which includes $\hat{x}^n$, because a low temperature $t$ favors low-energy $w^n$. Pseudo-code for our encoder appears in Algorithm 1 below.

Algorithm 1: Lossy encoder with fixed reproduction alphabet
Input: $x^n \in \alpha^n$, $\hat{\alpha}$, $\beta$, $c$, $R$
Output: bit-stream
Procedure:

  1. Initialize $w^n$ by quantizing $x^n$ with $\hat{\alpha}$
  2. Initialize $n_w(\cdot,\cdot)$ using $w^n$
  3. for $r = 1$ to $R$ do // super-iteration
  4.      $t_r \leftarrow \frac{c\, n\, \Delta_k}{\log(r)}$ // temperature
  5.      Draw a permutation of $\{1, \ldots, n\}$ at random
  6.      for $r' = 1$ to $n$ do
  7.         Let $j$ be component $r'$ of the permutation
  8.         Resample $w_j$ from $\Pr(w_j = \cdot \mid w^n_{\setminus j})$
  9.         Update $n_w(\cdot,\cdot)$
  10. Apply CTW to $w^n$ // compress outcome
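The algorithm above can be sketched end-to-end in Python. This is a deliberately naive version that recomputes the full energy for every candidate symbol, rather than the incremental O(k) updates the text discusses; all names are mine, the `log(r + 1)` shift avoids division by zero in the first super-iteration, the factor $n\Delta_k$ is folded into the constant `c`, and the final CTW compression step is omitted:

```python
import math
import random
from collections import Counter

def empirical_cond_entropy(w, k):
    """Order-k empirical conditional entropy H_k(w^n), in bits per symbol."""
    pair_counts, ctx_counts = Counter(), Counter()
    n = len(w)
    for i in range(k, n):
        ctx = tuple(w[i - k:i])
        pair_counts[(ctx, w[i])] += 1
        ctx_counts[ctx] += 1
    return -sum(c / n * math.log2(c / ctx_counts[ctx])
                for (ctx, _), c in pair_counts.items())

def mcmc_encoder(x, alphabet, beta, c, R, k):
    n = len(x)
    # Step 1: initialize w by scalar quantization of x onto the alphabet
    w = [min(alphabet, key=lambda b: (b - xi) ** 2) for xi in x]
    def energy(v):
        # Sign convention for beta follows the derivation in the text.
        dist = sum((v[i] - x[i]) ** 2 for i in range(n)) / n
        return empirical_cond_entropy(v, k) - beta * dist
    for r in range(1, R + 1):                    # super-iterations
        t = c / math.log(r + 1)                  # annealing, t = O(1/log r)
        for j in random.sample(range(n), n):     # random scan order
            weights = []
            for b in alphabet:                   # Boltzmann weight per symbol
                w[j] = b
                weights.append(math.exp(-energy(w) / t))
            w[j] = random.choices(alphabet, weights=weights)[0]
    return w  # would then be compressed losslessly with CTW
```

Recomputing the energy costs O(n) per candidate, so this sketch is only for intuition; the complexity discussion below shows how per-candidate cost drops to O(k).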

Computational issues

Looking at the pseudo-code, it is clear that the following could be computational bottlenecks:

  1. Initializing $n_w(\cdot,\cdot)$: a naive implementation scans the sequence $w^n$ (complexity $O(n)$) and initializes a data structure with $|\hat{\alpha}|^{k+1}$ elements. Unless $k < \log_{|\hat{\alpha}|}(n)$, this is super-linear in $n$. Recalling that $k = o(\log(n))$, initializing $n_w(\cdot,\cdot)$ therefore requires linear complexity $O(n)$.
  2. The inner loop runs $Rn$ times, and each time computing $\Pr(w_j = b \mid w^n_{\setminus j})$ for all possible $b \in \hat{\alpha}$ might be challenging. In particular, let us consider the computation of $\Delta d$ and $\Delta H$.
    (a) Computing $\Delta d$ requires constant time and is not burdensome.
    (b) Computing $\Delta H$ requires modifying the symbol counts for each context that changed. A key contribution of Jalali and Weissman was to recognize that the array of symbol counts, $n_w(\cdot,\cdot)$, changes in only $O(k)$ locations, where $k$ is the context order. Therefore, each computation of $\Delta H$ requires $O(k)$ time. Since $|\hat{\alpha}|$ such computations are needed per iteration, and there are $Rn$ iterations, this step requires $O(kRn|\hat{\alpha}|)$ time overall.
    (c) Updating $n_w(\cdot,\cdot)$ after $w_j$ is re-sampled from the Boltzmann distribution also requires $O(k)$ time. However, this step is performed only once per iteration, not $|\hat{\alpha}|$ times, and therefore requires less computation than step (b).
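The $O(k)$ count update discussed above works because changing $w_j$ only affects the $k+1$ length-$(k+1)$ windows that contain position $j$. A sketch of such an incremental update, assuming the counts are kept in hash tables (Counters) rather than a dense array:

```python
from collections import Counter

def update_counts(pair_counts, ctx_counts, w, j, new_symbol, k):
    """Incrementally update order-k counts n_w(.,.) after w[j] changes.
    Only the k+1 windows covering position j are touched, so this is O(k)."""
    n = len(w)
    affected = [i for i in range(j, min(j + k + 1, n)) if i >= k]
    for i in affected:                      # remove old contributions
        ctx = tuple(w[i - k:i])
        pair_counts[(ctx, w[i])] -= 1
        ctx_counts[ctx] -= 1
    w[j] = new_symbol
    for i in affected:                      # add new contributions
        ctx = tuple(w[i - k:i])
        pair_counts[(ctx, w[i])] += 1
        ctx_counts[ctx] += 1
```

After the update, the counts agree with what a full $O(n)$ rebuild over the modified sequence would produce, up to entries that have decayed to zero.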

Source:  OpenStax, Universal algorithms in signal processing and communications. OpenStax CNX. May 16, 2013 Download for free at http://cnx.org/content/col11524/1.1