
where $Q$ is the prior induced by the coding length function $l$.

Minimal redundancy

Note that

$$\sup_\theta r_n(l,\theta) \;\ge\; \int_\Lambda w(d\theta)\, r_n(l,\theta) \;\ge\; \inf_{l'} \int_\Lambda w(d\theta)\, r_n(l',\theta) \qquad \forall\, w,\, l.$$

Therefore,

$$R_n^+ = \inf_l \sup_\theta r_n(l,\theta) \;\ge\; \sup_w \inf_l \int_\Lambda w(d\theta)\, r_n(l,\theta) = R_n^-.$$

In fact, Gallager showed that $R_n^+ = R_n^-$. That is, the min-max and max-min redundancies are equal.

Let us revisit the Bernoulli source $p_\theta$, where $\theta \in \Lambda = [0,1]$. From the definition of [link], which relies on a uniform prior for the sources, i.e., $w(\theta) = 1$ for all $\theta \in \Lambda$, it can be shown that there exists a universal code with length function $l$ such that

$$E_\theta[l(x^n)] \;\le\; n\, E_\theta\!\left[ h_2\!\left( \frac{n_x(1)}{n} \right) \right] + \log(n+1) + 2,$$

where $h_2(p) = -p \log(p) - (1-p) \log(1-p)$ is the binary entropy. That is, the redundancy is approximately $\log(n)$ bits.
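To make this concrete, here is a minimal Python sketch (our illustration, not part of the original development) of a two-part code that essentially achieves the bound: $\log(n+1)$ bits describe $n_x(1)$, and $\log \binom{n}{n_x(1)} \le n\, h_2(n_x(1)/n)$ bits index the sequence among those with that count.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def two_part_code_length(x):
    """Length (bits) of a two-part code for a binary sequence x:
    log2(n+1) bits describe n_x(1), then log2(C(n, n_x(1))) bits
    index x among all sequences with that many 1's."""
    n, n1 = len(x), sum(x)
    return math.log2(n + 1) + math.log2(math.comb(n, n1))

x = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
n, n1 = len(x), sum(x)
print(two_part_code_length(x))            # actual two-part length
print(n * h2(n1 / n) + math.log2(n + 1))  # bound from the text (without the +2)
```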

Clarke and Barron [link] studied the weighting approach,

$$p(x) = \int_\Lambda dw(\theta)\, p_\theta(x),$$

and constructed a prior that achieves $R_n^- = R_n^+$ precisely for memoryless sources.
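For the Bernoulli source, the uniform-prior mixture can be evaluated in closed form via a Beta integral, $\int_0^1 \theta^{n_x(1)} (1-\theta)^{n_x(0)}\, d\theta = \frac{n_x(1)!\, n_x(0)!}{(n+1)!}$; the short Python sketch below (an added illustration) computes it.

```python
import math

def p_uniform_mixture(x):
    """Mixture probability of binary x under the uniform prior
    w(theta) = 1 on [0,1]; the Beta integral evaluates to
    n1! * n0! / (n+1)!."""
    n1 = sum(x)
    n0 = len(x) - n1
    return math.factorial(n1) * math.factorial(n0) / math.factorial(len(x) + 1)

print(p_uniform_mixture([1, 0, 1, 1]))  # 3! * 1! / 5! = 6/120 = 0.05
```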

Theorem 5 [link] For a memoryless source with an alphabet of size $r$ and parameter vector $\theta = (p(0), p(1), \ldots, p(r-1))$,

$$n R_n^-(w) = \frac{r-1}{2} \log \frac{n}{2\pi e} + \int_\Lambda w(d\theta) \log \frac{\sqrt{|I(\theta)|}}{w(\theta)} + O_n(1),$$

where $O_n(1)$ vanishes uniformly as $n \to \infty$ for any compact subset of $\Lambda$, and

$$I(\theta) \triangleq E\left[ \left( \frac{\partial \ln p_\theta(x_i)}{\partial \theta} \right) \left( \frac{\partial \ln p_\theta(x_i)}{\partial \theta} \right)^T \right]$$

is Fisher's information.

Note that when the distribution is sensitive to changes in the parameter, the Fisher information $I(\theta)$ is large, which increases the redundancy. That is, good parameter sensitivity (which aids estimation) means bad universal compression.

Denote

$$J(\theta) = \frac{\sqrt{|I(\theta)|}}{\int_\Lambda \sqrt{|I(\theta')|}\, d\theta'};$$

this is known as Jeffreys' prior. Using $w(\theta) = J(\theta)$, it can be shown that $R_n^- = R_n^+$.
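To see why Jeffreys' prior is the equalizing choice, note (a short calculation added here for clarity) that substituting $w = J$ into Theorem 5 makes the integrand constant:

$$\log \frac{\sqrt{|I(\theta)|}}{J(\theta)} = \log \int_\Lambda \sqrt{|I(\theta')|}\, d\theta',$$

independent of $\theta$, so that

$$n R_n^-(J) = \frac{r-1}{2} \log \frac{n}{2\pi e} + \log \int_\Lambda \sqrt{|I(\theta')|}\, d\theta' + O_n(1).$$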

Let us derive the Fisher information $I(\theta)$ for the Bernoulli source,

$$\begin{aligned}
p_\theta(x) &= \theta^{n_x(1)} (1-\theta)^{n_x(0)},\\
\ln p_\theta(x) &= n_x(1) \ln \theta + n_x(0) \ln(1-\theta),\\
\frac{\partial \ln p_\theta(x)}{\partial \theta} &= \frac{n_x(1)}{\theta} - \frac{n_x(0)}{1-\theta},\\
\left( \frac{\partial \ln p_\theta(x)}{\partial \theta} \right)^2 &= \frac{n_x^2(1)}{\theta^2} + \frac{n_x^2(0)}{(1-\theta)^2} - \frac{2\, n_x(1)\, n_x(0)}{\theta(1-\theta)},\\
E\left[ \left( \frac{\partial \ln p_\theta(x)}{\partial \theta} \right)^2 \right] &= \frac{\theta}{\theta^2} + \frac{1-\theta}{(1-\theta)^2} - \frac{2\, E[n_x(1)\, n_x(0)]}{\theta(1-\theta)} = \frac{1}{\theta} + \frac{1}{1-\theta} - 0 = \frac{1}{\theta(1-\theta)},
\end{aligned}$$

where the expectation is over a single symbol ($n = 1$), so that $n_x(1) \in \{0,1\}$, $E[n_x^2(1)] = \theta$, $E[n_x^2(0)] = 1-\theta$, and $n_x(1)\, n_x(0) = 0$.

Therefore, the Fisher information satisfies $I(\theta) = \frac{1}{\theta(1-\theta)}$.
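As a quick numerical sanity check (an added illustration, not from the source), the following Python sketch estimates $E[(\partial \ln p_\theta(x) / \partial \theta)^2]$ for a single Bernoulli symbol by Monte Carlo and compares it with $\frac{1}{\theta(1-\theta)}$.

```python
import random

def fisher_info_mc(theta, trials=200000, seed=0):
    """Monte Carlo estimate of E[(d ln p_theta(x) / d theta)^2]
    for a single Bernoulli(theta) symbol; the score function is
    1/theta when x = 1 and -1/(1-theta) when x = 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        score = 1.0 / theta if rng.random() < theta else -1.0 / (1.0 - theta)
        total += score * score
    return total / trials

theta = 0.3
print(fisher_info_mc(theta))          # close to 4.7619...
print(1.0 / (theta * (1.0 - theta)))  # 1/(theta(1-theta)) = 4.7619...
```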

Recall the Krichevsky–Trofimov coding, which was mentioned in [link]. Using the definition of Jeffreys' prior [link], we see that $J(\theta) \propto \frac{1}{\sqrt{\theta(1-\theta)}}$. Taking the integral over Jeffreys' prior,

$$p_J(x^n) = \int_0^1 \frac{c\, d\theta}{\sqrt{\theta(1-\theta)}}\; \theta^{n_x(1)} (1-\theta)^{n_x(0)} = c \int_0^1 \theta^{n_x(1)-\frac{1}{2}} (1-\theta)^{n_x(0)-\frac{1}{2}}\, d\theta = \frac{\Gamma\big(n_x(0)+\tfrac{1}{2}\big)\, \Gamma\big(n_x(1)+\tfrac{1}{2}\big)}{\pi\, \Gamma(n+1)},$$

where $c = 1/\pi$ normalizes Jeffreys' prior and $\Gamma(\cdot)$ denotes the gamma function.
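The closed form can be checked numerically; the Python sketch below (an added illustration) evaluates the gamma expression via `math.lgamma` and compares it against direct quadrature of the Beta-type integral.

```python
import math

def p_jeffreys_closed(n0, n1):
    """Gamma(n0+1/2) Gamma(n1+1/2) / (pi Gamma(n+1)) via lgamma."""
    return math.exp(math.lgamma(n0 + 0.5) + math.lgamma(n1 + 0.5)
                    - math.lgamma(n0 + n1 + 1.0)) / math.pi

def p_jeffreys_numeric(n0, n1, bins=200000):
    """Midpoint-rule quadrature of (1/pi) * integral of
    theta^(n1-1/2) (1-theta)^(n0-1/2); accurate here since
    n0, n1 >= 1 keeps the integrand bounded."""
    total = 0.0
    for i in range(bins):
        t = (i + 0.5) / bins
        total += t ** (n1 - 0.5) * (1.0 - t) ** (n0 - 0.5)
    return total / (bins * math.pi)

print(p_jeffreys_closed(2, 3))   # n_x(0) = 2, n_x(1) = 3
print(p_jeffreys_numeric(2, 3))  # agrees to several digits
```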

It can be shown that the mixture probability factors sequentially,

$$p_J(x^n) = \prod_{t=0}^{n-1} p_J(x_{t+1} \mid x_1^t),$$

where

$$p_J(x_{t+1} \mid x_1^t) = \frac{p_J(x_1^{t+1})}{p_J(x_1^t)}, \qquad p_J(x_{t+1} = 0 \mid x_1^t) = \frac{n_{x^t}(0) + \tfrac{1}{2}}{t+1}, \qquad p_J(x_{t+1} = 1 \mid x_1^t) = \frac{n_{x^t}(1) + \tfrac{1}{2}}{t+1}.$$

As before, this universal code can be implemented sequentially. It is due to Krichevsky and Trofimov [link]; its redundancy satisfies Theorem 5 of Clarke and Barron [link]; and it is commonly used in universal lossless compression.
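A minimal Python sketch of the sequential implementation (our illustration of the formulas above) follows; it accumulates the conditional probabilities $\frac{n_{x^t}(s) + 1/2}{t+1}$ and confirms that their product recovers the gamma-function closed form.

```python
import math

def kt_sequential(x):
    """Sequential Krichevsky-Trofimov assignment: multiply the
    add-1/2 conditional probabilities (count(s) + 1/2) / (t + 1)."""
    counts = [0, 0]  # running counts of 0's and 1's
    prob = 1.0
    for t, s in enumerate(x):
        prob *= (counts[s] + 0.5) / (t + 1)
        counts[s] += 1
    return prob

def kt_closed_form(n0, n1):
    """Gamma(n0+1/2) Gamma(n1+1/2) / (pi Gamma(n+1))."""
    return math.exp(math.lgamma(n0 + 0.5) + math.lgamma(n1 + 0.5)
                    - math.lgamma(n0 + n1 + 1.0)) / math.pi

x = [1, 0, 1, 1, 0]
print(kt_sequential(x))      # 0.01171875
print(kt_closed_form(2, 3))  # identical: the conditionals telescope
```

Because each update needs only the running symbol counts, the assignment can drive an arithmetic coder one symbol at a time.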

Rissanen's bound

Let us consider, on an intuitive level, why $C_n \approx \frac{r-1}{2} \frac{\log(n)}{n}$. Expending $\frac{r-1}{2} \log(n)$ bits allows us to differentiate between $(\sqrt{n})^{\,r-1}$ parameter vectors; that is, we would differentiate each of the $r-1$ parameters with $\sqrt{n}$ levels. Now consider a Bernoulli RV with (unknown) parameter $\theta$.

One perspective is that with $n$ drawings of the RV, the standard deviation in the number of 1's is $O(\sqrt{n})$. That is, $\sqrt{n}$ levels differentiate between parameter values up to a resolution that reflects the randomness of the experiment.
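A small simulation (added for illustration) makes this perspective concrete: the empirical standard deviation of the number of 1's matches $\sqrt{n\,\theta(1-\theta)} = O(\sqrt{n})$.

```python
import math, random

def count_std(theta, n, trials=5000, seed=1):
    """Empirical standard deviation of the number of 1's in n
    Bernoulli(theta) draws; theory: sqrt(n theta (1-theta))."""
    rng = random.Random(seed)
    counts = [sum(1 for _ in range(n) if rng.random() < theta)
              for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / (trials - 1)
    return math.sqrt(var)

theta, n = 0.3, 1000
print(count_std(theta, n))                 # roughly 14.5
print(math.sqrt(n * theta * (1 - theta)))  # sqrt(210) = 14.49...
```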

A second perspective is that of coding a sequence of Bernoulli outcomes with an imprecise parameter, where it is convenient to think of a universal code in terms of first quantizing the parameter and then using that (imprecise) parameter to encode the input $x$. For the Bernoulli example, the maximum likelihood parameter $\theta_{ML}$ satisfies $\theta_{ML} = \frac{n_x(1)}{n}$.
