
The integral transform character of these entities implies that there is essentially a one-to-one relationship between the transform and the distribution.

Moments

The name and some of the importance of the moment generating function arise from the fact that the derivatives of $M_X$ evaluated at $s = 0$ are the moments about the origin. Specifically,

$$M_X^{(k)}(0) = E[X^k], \quad \text{provided the } k\text{th moment exists}$$

Since expectation is an integral, regularity of the integrand allows us to differentiate inside the integral with respect to the parameter $s$.

$$M_X'(s) = \frac{d}{ds} E[e^{sX}] = E\left[\frac{d}{ds} e^{sX}\right] = E[X e^{sX}]$$

Upon setting $s = 0$, we have $M_X'(0) = E[X]$. Repeated differentiation gives the general result. The corresponding result for the characteristic function is $\varphi^{(k)}(0) = i^k E[X^k]$.
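
The identity $M_X'(0) = E[X]$ can be checked numerically. A minimal sketch, using an assumed example distribution (a fair die) not taken from the text: build $M_X$ as a finite sum, approximate the derivative at $s = 0$ by central differences, and compare with the directly computed mean.

```python
import math

# Assumed example: X uniform on {1, ..., 6} (a fair die)
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

def M(s):
    # Moment generating function M_X(s) = E[e^{sX}], a finite sum here
    return sum(p * math.exp(s * k) for k, p in zip(values, probs))

# Central-difference approximation of M'(0)
h = 1e-6
m1 = (M(h) - M(-h)) / (2 * h)

# Direct computation of E[X] from the distribution (3.5 for a fair die)
EX = sum(k * p for k, p in zip(values, probs))
```

Both `m1` and `EX` come out close to 3.5, agreeing to within the truncation error of the finite difference.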

The exponential distribution

The density function is $f_X(t) = \lambda e^{-\lambda t}$ for $t \ge 0$.

$$M_X(s) = E[e^{sX}] = \int_0^{\infty} \lambda e^{-(\lambda - s)t}\, dt = \frac{\lambda}{\lambda - s}, \quad s < \lambda$$
$$M_X'(s) = \frac{\lambda}{(\lambda - s)^2} \qquad M_X''(s) = \frac{2\lambda}{(\lambda - s)^3}$$
$$E[X] = M_X'(0) = \frac{\lambda}{\lambda^2} = \frac{1}{\lambda} \qquad E[X^2] = M_X''(0) = \frac{2\lambda}{\lambda^3} = \frac{2}{\lambda^2}$$

From this we obtain $\text{Var}[X] = 2/\lambda^2 - 1/\lambda^2 = 1/\lambda^2$.
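
As a numeric sanity check (a sketch with $\lambda = 2$ chosen arbitrarily, not a value from the text), finite differences of $M_X(s) = \lambda/(\lambda - s)$ at $s = 0$ recover $E[X] = 1/\lambda$, $E[X^2] = 2/\lambda^2$, and $\text{Var}[X] = 1/\lambda^2$:

```python
lam = 2.0  # rate parameter; arbitrary value for illustration

def M(s):
    # MGF of the exponential distribution, valid for s < lam
    return lam / (lam - s)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)            # ~ M'(0)  = E[X]   = 1/lam  = 0.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2    # ~ M''(0) = E[X^2] = 2/lam^2 = 0.5
var = m2 - m1**2                         # ~ Var[X] = 1/lam^2 = 0.25
```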


The generating function does not lend itself readily to computing moments, except that

$$g_X'(s) = \sum_{k=1}^{\infty} k s^{k-1} P(X = k) \quad \text{so that} \quad g_X'(1) = \sum_{k=1}^{\infty} k P(X = k) = E[X]$$

For higher-order moments, we may convert the generating function to the moment generating function by replacing $s$ with $e^s$, then work with $M_X$ and its derivatives.
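
A small sketch of both facts, using an assumed three-point distribution (not from the text): $g_X'(1)$ recovers $E[X]$, and the substitution $s \mapsto e^s$ turns $g_X$ into $M_X$, whose derivative at $0$ gives the same mean.

```python
import math

# Assumed example distribution: P(X=0)=0.2, P(X=1)=0.5, P(X=2)=0.3
pk = {0: 0.2, 1: 0.5, 2: 0.3}

def g(s):
    # Generating function g_X(s) = E[s^X]
    return sum(p * s**k for k, p in pk.items())

h = 1e-6
# g'(1) by central differences; here g'(s) = 0.5 + 0.6 s, so g'(1) = 1.1
g1 = (g(1 + h) - g(1 - h)) / (2 * h)

# Converting to the MGF via M_X(s) = g_X(e^s), then M'(0) = E[X] again
m1 = (g(math.exp(h)) - g(math.exp(-h))) / (2 * h)

EX = sum(k * p for k, p in pk.items())  # direct mean: 1.1
```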

The Poisson ($\mu$) distribution

$P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$, $k \ge 0$, so that

$$g_X(s) = e^{-\mu} \sum_{k=0}^{\infty} \frac{s^k \mu^k}{k!} = e^{-\mu} \sum_{k=0}^{\infty} \frac{(s\mu)^k}{k!} = e^{-\mu} e^{\mu s} = e^{\mu(s - 1)}$$

We convert to $M_X$ by replacing $s$ with $e^s$ to get $M_X(s) = e^{\mu(e^s - 1)}$. Then

$$M_X'(s) = e^{\mu(e^s - 1)} \mu e^s \qquad M_X''(s) = e^{\mu(e^s - 1)} \left[ \mu^2 e^{2s} + \mu e^s \right]$$

so that

$$E[X] = M_X'(0) = \mu, \quad E[X^2] = M_X''(0) = \mu^2 + \mu, \quad \text{and} \quad \text{Var}[X] = \mu^2 + \mu - \mu^2 = \mu$$

These results agree, of course, with those found by direct computation with the distribution.
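
The agreement can also be confirmed numerically. A sketch with $\mu = 2.5$ chosen arbitrarily: finite differences of $M_X(s) = e^{\mu(e^s - 1)}$ at $s = 0$ reproduce $E[X] = \mu$, $E[X^2] = \mu^2 + \mu$, and $\text{Var}[X] = \mu$.

```python
import math

mu = 2.5  # arbitrary parameter value for illustration

def M(s):
    # MGF of Poisson(mu): M_X(s) = exp(mu * (e^s - 1))
    return math.exp(mu * (math.exp(s) - 1))

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)            # ~ E[X]   = mu       = 2.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2    # ~ E[X^2] = mu^2 + mu = 8.75
var = m2 - m1**2                         # ~ Var[X] = mu       = 2.5
```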


Operational properties

We refer to the following as operational properties .

  • (T1) If $Z = aX + b$, then
    $$M_Z(s) = e^{bs} M_X(as), \qquad \varphi_Z(u) = e^{iub} \varphi_X(au), \qquad g_Z(s) = s^b g_X(s^a)$$
    For the moment generating function, this pattern follows from
    $$E[e^{(aX + b)s}] = e^{bs} E[e^{(as)X}]$$
    Similar arguments hold for the other two.
  • (T2) If the pair $\{X, Y\}$ is independent, then
    $$M_{X+Y}(s) = M_X(s) M_Y(s), \qquad \varphi_{X+Y}(u) = \varphi_X(u) \varphi_Y(u), \qquad g_{X+Y}(s) = g_X(s) g_Y(s)$$
    For the moment generating function, $e^{sX}$ and $e^{sY}$ form an independent pair for each value of the parameter $s$. By the product rule for expectation,
    $$E[e^{s(X+Y)}] = E[e^{sX} e^{sY}] = E[e^{sX}] E[e^{sY}]$$
    Similar arguments are used for the other two transforms.
    A partial converse for (T2) is as follows:
  • (T3) If $M_{X+Y}(s) = M_X(s) M_Y(s)$, then the pair $\{X, Y\}$ is uncorrelated. To show this, we obtain two expressions for $E[(X+Y)^2]$, one by direct expansion and use of linearity, and the other by taking the second derivative of the moment generating function.
    $$E[(X+Y)^2] = E[X^2] + E[Y^2] + 2E[XY]$$
    $$M_{X+Y}''(s) = [M_X(s) M_Y(s)]'' = M_X''(s) M_Y(s) + M_X(s) M_Y''(s) + 2 M_X'(s) M_Y'(s)$$
    On setting $s = 0$ and using the fact that $M_X(0) = M_Y(0) = 1$, we have
    $$E[(X+Y)^2] = E[X^2] + E[Y^2] + 2E[X]E[Y]$$
    which implies the equality $E[XY] = E[X]E[Y]$.

Note that we have not shown that being uncorrelated implies the product rule.
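
The product rule for sums of independent random variables can be checked exactly for small discrete distributions. A sketch with two assumed example distributions: construct the distribution of $X + Y$ by convolution (which uses independence), then compare its MGF with the product $M_X(s) M_Y(s)$ at a fixed $s$.

```python
import math

# Assumed small distributions for independent X and Y (illustrative only)
px = {0: 0.3, 1: 0.7}
py = {0: 0.6, 2: 0.4}

# Distribution of Z = X + Y by convolution, valid under independence
pz = {}
for x, p in px.items():
    for y, q in py.items():
        pz[x + y] = pz.get(x + y, 0.0) + p * q

def mgf(dist, s):
    # MGF of a finite discrete distribution given as {value: probability}
    return sum(p * math.exp(s * k) for k, p in dist.items())

s = 0.7  # any fixed value of the parameter
lhs = mgf(pz, s)                     # M_{X+Y}(s) computed from the convolution
rhs = mgf(px, s) * mgf(py, s)        # M_X(s) * M_Y(s)
```

The two agree up to floating-point rounding, as the product rule predicts.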

We utilize these properties in determining the moment generating and generating functions for several of our common distributions.

Some discrete distributions

  1. Indicator function: $X = I_E$, $P(E) = p$, $q = 1 - p$
    $$g_X(s) = s^0 q + s^1 p = q + ps \qquad M_X(s) = g_X(e^s) = q + pe^s$$
  2. Simple random variable: $X = \sum_{i=1}^{n} t_i I_{A_i}$ (primitive form), $P(A_i) = p_i$
    $$M_X(s) = \sum_{i=1}^{n} e^{s t_i} p_i$$
  3. Binomial $(n, p)$: $X = \sum_{i=1}^{n} I_{E_i}$ with $\{I_{E_i} : 1 \le i \le n\}$ iid, $P(E_i) = p$
    We use the product rule for sums of independent random variables and the generating function for the indicator function.
    $$g_X(s) = \prod_{i=1}^{n} (q + ps) = (q + ps)^n \qquad M_X(s) = (q + pe^s)^n$$
  4. Geometric $(p)$: $P(X = k) = pq^k$, $k \ge 0$, $E[X] = q/p$. We use the formula for the geometric series to get
    $$g_X(s) = \sum_{k=0}^{\infty} p q^k s^k = p \sum_{k=0}^{\infty} (qs)^k = \frac{p}{1 - qs} \qquad M_X(s) = \frac{p}{1 - qe^s}$$
  5. Negative binomial $(m, p)$: If $Y_m$ is the number of the trial in a Bernoulli sequence on which the $m$th success occurs, and $X_m = Y_m - m$ is the number of failures before the $m$th success, then
    $$P(X_m = k) = P(Y_m - m = k) = C(-m, k)(-q)^k p^m \quad \text{where} \quad C(-m, k) = \frac{-m(-m-1)(-m-2) \cdots (-m-k+1)}{k!}$$
    The power series expansion about $t = 0$ shows that
    $$(1 + t)^{-m} = 1 + C(-m, 1)t + C(-m, 2)t^2 + \cdots \quad \text{for } -1 < t < 1$$
    Hence
    $$M_{X_m}(s) = p^m \sum_{k=0}^{\infty} C(-m, k)(-q)^k e^{sk} = \left( \frac{p}{1 - qe^s} \right)^m$$
    Comparison with the moment generating function for the geometric distribution shows that $X_m = Y_m - m$ has the same distribution as the sum of $m$ iid random variables, each geometric $(p)$. This suggests that the sequence is characterized by independent, successive waiting times to success. This also shows that the expectation and variance of $X_m$ are $m$ times the expectation and variance for the geometric. Thus
    $$E[X_m] = mq/p \quad \text{and} \quad \text{Var}[X_m] = mq/p^2$$
  6. Poisson $(\mu)$: $P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$, $k \ge 0$. In the Poisson example above, we establish $g_X(s) = e^{\mu(s - 1)}$ and $M_X(s) = e^{\mu(e^s - 1)}$. If $\{X, Y\}$ is an independent pair, with $X$ Poisson $(\lambda)$ and $Y$ Poisson $(\mu)$, then $Z = X + Y$ is Poisson $(\lambda + \mu)$. This follows from (T2) and the product of exponentials.
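
As a closing sanity check on the binomial entry above (with assumed parameters $n = 5$, $p = 0.3$, chosen only for illustration), the closed form $(q + ps)^n$ agrees with direct summation of $s^k$ against the binomial pmf, and $M_X(s) = g_X(e^s)$ gives the stated MGF:

```python
import math

n, p = 5, 0.3   # assumed parameters for illustration
q = 1 - p

# g_X(s) computed directly from the binomial pmf
s = 0.4
g_direct = sum(math.comb(n, k) * p**k * q**(n - k) * s**k for k in range(n + 1))

# Closed form from the product rule for iid indicator functions
g_closed = (q + p * s)**n

# Likewise M_X(t) = g_X(e^t) = (q + p e^t)^n
t = 0.2
M_closed = (q + p * math.exp(t))**n
```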

Source: OpenStax, Applied probability. OpenStax CNX. Aug 31, 2009. Download for free at http://cnx.org/content/col10708/1.6
