
Additional properties

The fundamental properties of simple random variables which survive the extension serve as the basis of an extensive and powerful list of properties of expectation of real random variables and real functions of random vectors. Some of the more important of these are listed in the table in Appendix E. We often refer to these properties by the numbers used in that table.

Some basic forms

The mapping theorems provide a number of basic integral (or summation) forms for computation.

  1. In general, if Z = g(X) with distribution functions F_X and F_Z, we have the expectation as a Stieltjes integral:
    E[Z] = E[g(X)] = \int g(t)\, F_X(dt) = \int u\, F_Z(du)
  2. If X and g(X) are absolutely continuous, the Stieltjes integrals are replaced by
    E[Z] = \int g(t) f_X(t)\, dt = \int u f_Z(u)\, du
    where the limits of integration are determined by f_X or f_Z. Justification for use of the density function is provided by the Radon-Nikodym theorem (property (E19)).
  3. If X is simple, in a primitive form (including canonical form), then
    E[Z] = E[g(X)] = \sum_{j=1}^{m} g(c_j) P(C_j)
    If the distribution for Z = g(X) is determined by a csort operation, then
    E[Z] = \sum_{k=1}^{n} v_k P(Z = v_k)
    (A numerical sketch of both computations follows this list.)
  4. The extension to unbounded, nonnegative, integer-valued random variables is shown above. The finite sums are replaced by infinite series (provided they converge).
  5. For Z = g(X, Y),
    E[Z] = E[g(X,Y)] = \iint g(t,u)\, F_{XY}(dt\, du) = \int v\, F_Z(dv)
  6. In the absolutely continuous case,
    E[Z] = E[g(X,Y)] = \iint g(t,u) f_{XY}(t,u)\, du\, dt = \int v f_Z(v)\, dv
  7. For joint simple X, Y (see the section on Expectation for Simple Random Variables),
    E[Z] = E[g(X,Y)] = \sum_{i=1}^{n} \sum_{j=1}^{m} g(t_i, u_j) P(X = t_i, Y = u_j)
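
Since the computations in this book are carried out in MATLAB, a minimal sketch of form 3 may be helpful. It is not the book's csort m-procedure; it uses plain MATLAB (unique and accumarray) to the same effect, with made-up values and probabilities:

    % Simple random variable X in a primitive form (hypothetical example data)
    c  = [1 2 3 4];               % values c_j of X
    PX = [0.2 0.3 0.1 0.4];       % probabilities P(C_j) = P(X = c_j)
    g  = @(t) (t - 2).^2;         % an illustrative function g

    % First expression: E[g(X)] = sum_j g(c_j) P(C_j)
    EZ1 = sum(g(c) .* PX);

    % csort-style consolidation: distinct values v_k of Z = g(X) and P(Z = v_k)
    Z = g(c);
    [v, ~, ic] = unique(Z);       % distinct values v_k
    PZ = accumarray(ic, PX(:));   % sum probabilities of coinciding values
    EZ2 = sum(v(:) .* PZ);        % E[Z] = sum_k v_k P(Z = v_k)

    fprintf('E[g(X)] = %g = %g\n', EZ1, EZ2)   % both give 1.9

Both computations yield the same expectation, as form 3 asserts; the consolidation step matters only when g maps several values of X to the same value of Z.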

Mechanical interpretation and approximation procedures

In elementary mechanics, since the total mass is one, the quantity E[X] = \int t f_X(t)\, dt is the location of the center of mass. This theoretically rigorous fact may be derived heuristically from an examination of the expectation for a simple approximating random variable. Recall the discussion of the m-procedure for discrete approximation in the unit on Distribution Approximations. The range of X is divided into equal subintervals. The values of the approximating random variable are at the midpoints of the subintervals. The associated probability is the probability mass in the subinterval, which is approximately f_X(t_i) dx, where dx is the length of the subinterval. This approximation improves with an increasing number of subdivisions, with corresponding decrease in dx. The expectation of the approximating simple random variable X_s is

E[X_s] = \sum_i t_i f_X(t_i)\, dx \approx \int t f_X(t)\, dt

The approximation improves with increasingly fine subdivisions. The center of mass of the approximating distribution approaches the center of mass of the smooth distribution.

It should be clear that a similar argument for g(X) leads to the integral expression

E[g(X)] = \int g(t) f_X(t)\, dt

This argument shows that we should be able to use tappr to set up for approximating the expectation E[g(X)], as well as for approximating P(g(X) ∈ M), etc. We return to this later.
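
The following is a minimal MATLAB sketch of this midpoint approximation. It is not the book's tappr m-procedure, only a plain-script illustration of the same idea, with an assumed exponential density and an assumed function g:

    % Midpoint approximation of E[g(X)] and P(g(X) in M) for X ~ exponential(2)
    lambda = 2;
    f = @(t) lambda * exp(-lambda * t);   % density f_X
    g = @(t) t.^2;                        % function whose expectation we want

    n  = 10000;                           % number of subintervals
    a  = 0;  b = 10;                      % b chosen so the neglected tail mass is negligible
    dx = (b - a) / n;
    t  = a + dx/2 : dx : b;               % midpoints t_i

    EgX = sum(g(t) .* f(t)) * dx;         % approximates sum_i g(t_i) f_X(t_i) dx
    PM  = sum(f(t(g(t) <= 1))) * dx;      % approximates P(g(X) <= 1)

    fprintf('E[g(X)] approx %g  (exact 2/lambda^2 = %g)\n', EgX, 2/lambda^2)
    fprintf('P(g(X) <= 1) approx %g  (exact 1 - exp(-2) = %g)\n', PM, 1 - exp(-2))

The same pattern extends to two dimensions for E[g(X,Y)]: evaluate g and f_{XY} on a midpoint grid and sum g(t_i,u_j) f_{XY}(t_i,u_j) dx du.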

Mean values for some absolutely continuous distributions

  1. Uniform on [a, b]: f_X(t) = 1/(b - a), a ≤ t ≤ b. The center of mass is at (a + b)/2. To calculate the value formally, we write
    E[X] = \int t f_X(t)\, dt = \frac{1}{b-a} \int_a^b t\, dt = \frac{b^2 - a^2}{2(b-a)} = \frac{b+a}{2}
  2. Symmetric triangular on [a, b]. The graph of the density is an isosceles triangle with base on the interval [a, b]. By symmetry, the center of mass, hence the expectation, is at the midpoint (a + b)/2.
  3. Exponential (λ): f_X(t) = λ e^{-λt}, 0 ≤ t. Using a well-known definite integral (see Appendix B), we have
    E[X] = \int t f_X(t)\, dt = \int_0^{\infty} \lambda t e^{-\lambda t}\, dt = 1/\lambda
  4. Gamma (α, λ): f_X(t) = (1/Γ(α)) t^{α-1} λ^α e^{-λt}, 0 ≤ t. Again we use one of the integrals in Appendix B to obtain
    E[X] = \int t f_X(t)\, dt = \frac{1}{\Gamma(\alpha)} \int_0^{\infty} \lambda^{\alpha} t^{\alpha} e^{-\lambda t}\, dt = \frac{\Gamma(\alpha+1)}{\lambda\, \Gamma(\alpha)} = \alpha/\lambda
    The last equality comes from the fact that Γ(α + 1) = α Γ(α).
  5. Beta (r, s): f_X(t) = (Γ(r+s)/(Γ(r)Γ(s))) t^{r-1} (1-t)^{s-1}, 0 < t < 1. We use the fact that \int_0^1 u^{r-1}(1-u)^{s-1}\, du = \frac{\Gamma(r)\Gamma(s)}{\Gamma(r+s)}, for r > 0, s > 0. Then
    E[X] = \int t f_X(t)\, dt = \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} \int_0^1 t^{r}(1-t)^{s-1}\, dt = \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} \cdot \frac{\Gamma(r+1)\Gamma(s)}{\Gamma(r+s+1)} = \frac{r}{r+s}
  6. Weibull (α, λ, ν): F_X(t) = 1 - e^{-λ(t-ν)^α}, with α > 0, λ > 0, ν ≥ 0, t ≥ ν. Differentiation shows
    f_X(t) = αλ(t-ν)^{α-1} e^{-λ(t-ν)^α}, t ≥ ν
    First, consider Y ~ exponential (λ). For this random variable,
    E[Y^r] = \int_0^{\infty} t^r \lambda e^{-\lambda t}\, dt = \frac{\Gamma(r+1)}{\lambda^r}
    If Y ~ exponential (1), then techniques for functions of random variables show that (1/λ^{1/α}) Y^{1/α} + ν ~ Weibull (α, λ, ν). Hence,
    E[X] = \frac{1}{\lambda^{1/\alpha}} E[Y^{1/\alpha}] + \nu = \frac{1}{\lambda^{1/\alpha}} \Gamma\!\left(\frac{1}{\alpha} + 1\right) + \nu
  7. Normal (μ, σ²). The symmetry of the distribution about t = μ shows that E[X] = μ. This, of course, may be verified by integration. A standard trick simplifies the work.
    E[X] = \int_{-\infty}^{\infty} t f_X(t)\, dt = \int_{-\infty}^{\infty} (t - \mu) f_X(t)\, dt + \mu
    We have used the fact that \int_{-\infty}^{\infty} f_X(t)\, dt = 1. If we make the change of variable x = t - μ in the last integral, the integrand becomes an odd function, so the integral is zero. Thus E[X] = μ.
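
Each of these closed forms is easy to check numerically. A short MATLAB sketch using the built-in integral quadrature, with illustrative parameter values, compares the integral of t f_X(t) against the stated mean for four of the distributions:

    % Numerical check of mean-value formulas (illustrative parameters)
    lambda = 3; alpha = 2; r = 2; s = 5; nu = 1;

    % Exponential(lambda): E[X] = 1/lambda
    fE = @(t) t .* lambda .* exp(-lambda*t);
    [integral(fE, 0, Inf), 1/lambda]

    % Gamma(alpha, lambda): E[X] = alpha/lambda
    fG = @(t) t .* (lambda^alpha/gamma(alpha)) .* t.^(alpha-1) .* exp(-lambda*t);
    [integral(fG, 0, Inf), alpha/lambda]

    % Beta(r, s): E[X] = r/(r+s)
    fB = @(t) t .* (gamma(r+s)/(gamma(r)*gamma(s))) .* t.^(r-1) .* (1-t).^(s-1);
    [integral(fB, 0, 1), r/(r+s)]

    % Weibull(alpha, lambda, nu): E[X] = Gamma(1/alpha + 1)/lambda^(1/alpha) + nu
    fW = @(t) t .* alpha*lambda .* (t-nu).^(alpha-1) .* exp(-lambda*(t-nu).^alpha);
    [integral(fW, nu, Inf), gamma(1/alpha + 1)/lambda^(1/alpha) + nu]

Each pair of numbers agrees to quadrature accuracy.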





Source: OpenStax, Applied probability. OpenStax CNX. Aug 31, 2009. Download for free at http://cnx.org/content/col10708/1.6
