Let us consider a distribution with p.d.f. $f(x;\theta)$, $\theta\in\Omega$, such that the parameter $\theta$ is not involved in the support of the distribution. We want to be able to find the maximum likelihood estimator $\hat\theta$ by solving
$$\frac{\partial\,[\ln L(\theta)]}{\partial\theta}=0,$$
where here the partial derivative was used because $L(\theta)$ involves $x_1,x_2,\ldots,x_n$ as well as $\theta$.

That is,
$$\frac{\partial\,[\ln L(\hat\theta)]}{\partial\theta}=0,$$
where now, with $\hat\theta$ in this expression,
$$\ln L(\hat\theta)=\ln f(x_1;\hat\theta)+\ln f(x_2;\hat\theta)+\cdots+\ln f(x_n;\hat\theta).$$

We can approximate the left-hand member of this latter equation by a linear function found from the first two terms of a Taylor's series expanded about $\theta$, namely
$$\frac{\partial\,[\ln L(\theta)]}{\partial\theta}+(\hat\theta-\theta)\,\frac{\partial^{2}\,[\ln L(\theta)]}{\partial\theta^{2}}\approx 0,$$
when $\ln L(\theta)=\ln f(x_1;\theta)+\cdots+\ln f(x_n;\theta)$.

Obviously, this approximation is good enough only if $\hat\theta$ is close to $\theta$, and an adequate mathematical proof involves those conditions. But a heuristic argument can be made by solving for $\hat\theta-\theta$ to obtain
$$\hat\theta-\theta=\frac{-\,\partial\,[\ln L(\theta)]/\partial\theta}{\partial^{2}\,[\ln L(\theta)]/\partial\theta^{2}}.\qquad(1)$$
Recall that
$$\ln L(\theta)=\ln f(x_1;\theta)+\ln f(x_2;\theta)+\cdots+\ln f(x_n;\theta)$$
and
$$\frac{\partial\ln L(\theta)}{\partial\theta}=\frac{\partial\,[\ln f(x_1;\theta)]}{\partial\theta}+\frac{\partial\,[\ln f(x_2;\theta)]}{\partial\theta}+\cdots+\frac{\partial\,[\ln f(x_n;\theta)]}{\partial\theta}.\qquad(2)$$

The expression (2) is the sum of the $n$ independent and identically distributed random variables
$$Y_i=\frac{\partial\,[\ln f(X_i;\theta)]}{\partial\theta},\qquad i=1,2,\ldots,n,$$
and thus, by the Central Limit Theorem, (2) has an approximate normal distribution with mean (in the continuous case) equal to
$$\int_{-\infty}^{\infty}\frac{\partial\,[\ln f(x;\theta)]}{\partial\theta}\,f(x;\theta)\,dx=\int_{-\infty}^{\infty}\frac{\partial\,[f(x;\theta)]/\partial\theta}{f(x;\theta)}\,f(x;\theta)\,dx=\int_{-\infty}^{\infty}\frac{\partial\,[f(x;\theta)]}{\partial\theta}\,dx=\frac{\partial}{\partial\theta}\left[\int_{-\infty}^{\infty}f(x;\theta)\,dx\right]=\frac{\partial}{\partial\theta}\,[1]=0.$$

Clearly, the mathematical condition is needed that it is permissible to interchange the operations of integration and differentiation in those last steps. Of course, the integral of $f(x;\theta)$ is equal to one because it is a p.d.f.

Since we know that the mean of each $Y$ is zero, namely
$$\int_{-\infty}^{\infty}\frac{\partial\,[\ln f(x;\theta)]}{\partial\theta}\,f(x;\theta)\,dx=0,$$
let us take derivatives of each member of this equation with respect to $\theta$, obtaining
$$\int_{-\infty}^{\infty}\left\{\frac{\partial^{2}\,[\ln f(x;\theta)]}{\partial\theta^{2}}\,f(x;\theta)+\frac{\partial\,[\ln f(x;\theta)]}{\partial\theta}\,\frac{\partial\,[f(x;\theta)]}{\partial\theta}\right\}dx=0.$$

However,
$$\frac{\partial\,[f(x;\theta)]}{\partial\theta}=\frac{\partial\,[\ln f(x;\theta)]}{\partial\theta}\,f(x;\theta),$$
so
$$\int_{-\infty}^{\infty}\left\{\frac{\partial\,[\ln f(x;\theta)]}{\partial\theta}\right\}^{2}f(x;\theta)\,dx=-\int_{-\infty}^{\infty}\frac{\partial^{2}\,[\ln f(x;\theta)]}{\partial\theta^{2}}\,f(x;\theta)\,dx.$$

Since $E(Y)=0$, this last expression provides the variance of $Y=\partial\,[\ln f(X;\theta)]/\partial\theta$. Then the variance of expression (2) is $n$ times this value, namely
$$n\,E\!\left\{\left[\frac{\partial\ln f(X;\theta)}{\partial\theta}\right]^{2}\right\}.\qquad(3)$$
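As a quick concreteness check, these two facts (the score has mean zero, and the two expressions for its variance agree) can be verified symbolically. The following is only a sketch using Python's sympy, with the exponential p.d.f. $f(x;\theta)=(1/\theta)e^{-x/\theta}$ chosen purely as an illustration (it is treated in an example below):

```python
import sympy as sp

x, theta = sp.symbols("x theta", positive=True)

# Illustrative choice: exponential p.d.f. f(x; theta) = (1/theta) * exp(-x/theta)
f = sp.exp(-x / theta) / theta
log_f = sp.log(f)

score = sp.diff(log_f, theta)        # partial of ln f(x; theta) w.r.t. theta
second = sp.diff(log_f, theta, 2)    # second partial derivative

# E[score] should be 0 (differentiation under the integral sign is valid here)
mean_score = sp.integrate(score * f, (x, 0, sp.oo))

# The two expressions for the variance of Y should agree
info_sq = sp.integrate(score**2 * f, (x, 0, sp.oo))
info_2nd = -sp.integrate(second * f, (x, 0, sp.oo))

print(sp.simplify(mean_score))       # 0
print(sp.simplify(info_sq))          # 1/theta**2
print(sp.simplify(info_2nd))         # 1/theta**2
```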
Let us rewrite (1) as
$$\sqrt{n}\,(\hat\theta-\theta)=\frac{\dfrac{\partial\ln L(\theta)/\partial\theta}{\sqrt{n\,E\{[\partial\ln f(X;\theta)/\partial\theta]^{2}\}}}}{-\,\dfrac{(1/n)\,\partial^{2}\ln L(\theta)/\partial\theta^{2}}{E\{[\partial\ln f(X;\theta)/\partial\theta]^{2}\}}}\cdot\frac{1}{\sqrt{E\{[\partial\ln f(X;\theta)/\partial\theta]^{2}\}}}.\qquad(4)$$

The numerator of (4) has an approximate $N(0,1)$ distribution; and those unstated mathematical conditions require, in some sense, that
$$-\frac{1}{n}\,\frac{\partial^{2}\ln L(\theta)}{\partial\theta^{2}}$$
converge to $E\{[\partial\ln f(X;\theta)/\partial\theta]^{2}\}$, so that the denominator of the first ratio in (4) tends to one. Accordingly, the ratio given in equation (4) must be approximately $N(0,1)$. That is, $\hat\theta$ has an approximate normal distribution with mean $\theta$ and standard deviation
$$\frac{1}{\sqrt{n\,E\{[\partial\ln f(X;\theta)/\partial\theta]^{2}\}}}.$$
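A small Monte Carlo experiment makes this limiting statement tangible. The sketch below uses the exponential case treated in the example that follows, where the maximum likelihood estimator is $\bar X$ and the theoretical standard deviation is $\theta/\sqrt{n}$; the values $\theta=2$, $n=100$ are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 100, 20000     # arbitrary illustration values

# For the exponential p.d.f., the MLE of theta is the sample mean.
mles = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)

print(mles.mean())                   # close to theta = 2
print(mles.std(ddof=1))              # close to theta / sqrt(n) = 0.2

# Approximate normality: about 95% of the MLEs within 1.96 theoretical sd's
within = np.abs(mles - theta) <= 1.96 * theta / np.sqrt(n)
print(within.mean())                 # close to 0.95
```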
With the underlying exponential p.d.f.
$$f(x;\theta)=\frac{1}{\theta}\,e^{-x/\theta},\qquad 0<x<\infty,\quad \theta\in\Omega=\{\theta:0<\theta<\infty\},$$
$\bar X$ is the maximum likelihood estimator. Since $\ln f(x;\theta)=-\ln\theta-\dfrac{x}{\theta}$ and $\dfrac{\partial\ln f(x;\theta)}{\partial\theta}=-\dfrac{1}{\theta}+\dfrac{x}{\theta^{2}}$ and $\dfrac{\partial^{2}\ln f(x;\theta)}{\partial\theta^{2}}=\dfrac{1}{\theta^{2}}-\dfrac{2x}{\theta^{3}}$, we have
$$-E\left[\frac{1}{\theta^{2}}-\frac{2X}{\theta^{3}}\right]=-\frac{1}{\theta^{2}}+\frac{2\theta}{\theta^{3}}=\frac{1}{\theta^{2}}$$
because $E(X)=\theta$. That is, $\bar X$ has an approximate normal distribution with mean $\theta$ and standard deviation $\theta/\sqrt{n}$. Thus the random interval $\bar X\pm 1.96\,(\theta/\sqrt{n})$ has an approximate probability of 0.95 for covering $\theta$. Substituting the observed $\bar x$ for $\theta$, as well as for $\bar X$, we say that $\bar x\pm 1.96\,\bar x/\sqrt{n}$ is an approximate 95% confidence interval for $\theta$.
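In code the interval is a one-liner. Since the section quotes no data for this case, the sketch below simulates a hypothetical sample (true $\theta=2$ and $n=50$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50                                    # hypothetical sample size
x = rng.exponential(scale=2.0, size=n)    # stand-in data; true theta = 2

xbar = x.mean()
half = 1.96 * xbar / np.sqrt(n)
print(f"approximate 95% CI for theta: ({xbar - half:.3f}, {xbar + half:.3f})")
```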
The maximum likelihood estimator for $\lambda$ in the Poisson p.d.f.
$$f(x;\lambda)=\frac{\lambda^{x}e^{-\lambda}}{x!},\qquad x=0,1,2,\ldots,$$
is $\hat\lambda=\bar X$. Now $\ln f(x;\lambda)=x\ln\lambda-\lambda-\ln(x!)$ and $\dfrac{\partial\ln f(x;\lambda)}{\partial\lambda}=\dfrac{x}{\lambda}-1$ and $\dfrac{\partial^{2}\ln f(x;\lambda)}{\partial\lambda^{2}}=-\dfrac{x}{\lambda^{2}}$. Thus
$$-E\left[-\frac{X}{\lambda^{2}}\right]=\frac{\lambda}{\lambda^{2}}=\frac{1}{\lambda},$$
and $\hat\lambda=\bar X$ has an approximate normal distribution with mean $\lambda$ and standard deviation $\sqrt{\lambda/n}$. Finally, $\bar x\pm 1.645\sqrt{\bar x/n}$ serves as an approximate 90% confidence interval for $\lambda$. With the data from Example (…), $\bar x=2.225$, and hence this interval is from 1.887 to 2.563.
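The same computation as a small helper function. The sample mean $\bar x=2.225$ comes from the text, but the example's sample size is not reproduced here, so the $n=53$ used below is only a back-solved guess that approximately matches the quoted endpoints:

```python
import math

def poisson_ci(xbar, n, z=1.645):
    # Approximate CI for lambda: xbar +/- z * sqrt(xbar / n)
    half = z * math.sqrt(xbar / n)
    return xbar - half, xbar + half

# xbar = 2.225 is quoted in the text; n = 53 is a guess for illustration only.
print(poisson_ci(2.225, 53))   # roughly (1.888, 2.562)
```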
It is interesting that there is another theorem which is somewhat related to the preceding result in that the approximate variance of $\hat\theta$, namely $1/\left(n\,E\{[\partial\ln f(X;\theta)/\partial\theta]^{2}\}\right)$, serves as a lower bound for the variance of every unbiased estimator of $\theta$. Thus we know that if a certain unbiased estimator has a variance equal to that lower bound, we cannot find a better one and hence it is the best in the sense of being the unbiased minimum variance estimator. This result is called the Rao–Cramér Inequality.
Let $X_1,X_2,\ldots,X_n$ be a random sample from a distribution with p.d.f. $f(x;\theta)$, $\theta\in\Omega$, where the support of $X$ does not depend upon $\theta$, so that we can differentiate, with respect to $\theta$, under integral signs like that in the following integral:
$$\int_{-\infty}^{\infty}f(x;\theta)\,dx.$$
If $Y=u(X_1,X_2,\ldots,X_n)$ is an unbiased estimator of $\theta$, then
$$\operatorname{Var}(Y)\ \ge\ \frac{1}{n\displaystyle\int_{-\infty}^{\infty}\left[\partial\ln f(x;\theta)/\partial\theta\right]^{2}f(x;\theta)\,dx}\ =\ \frac{-1}{n\displaystyle\int_{-\infty}^{\infty}\left[\partial^{2}\ln f(x;\theta)/\partial\theta^{2}\right]f(x;\theta)\,dx}.$$
Note that the two integrals in the respective denominators are the expectations $E\left\{\left[\dfrac{\partial\ln f(X;\theta)}{\partial\theta}\right]^{2}\right\}$ and $E\left[\dfrac{\partial^{2}\ln f(X;\theta)}{\partial\theta^{2}}\right]$; sometimes one is easier to compute than the other.
Note that above we computed the lower bound for each of two distributions: the exponential and the Poisson. Those respective lower bounds were $\theta^{2}/n$ and $\lambda/n$. Since in each case the variance of $\bar X$ equals the lower bound, $\bar X$ is the unbiased minimum variance estimator.
The sample $X_1,X_2,\ldots,X_n$ arises from a distribution with p.d.f.
$$f(x;\theta)=\theta x^{\theta-1},\qquad 0<x<1,\quad \theta\in\Omega=\{\theta:0<\theta<\infty\}.$$
We have
$$\ln f(x;\theta)=\ln\theta+(\theta-1)\ln x,\qquad \frac{\partial\ln f(x;\theta)}{\partial\theta}=\frac{1}{\theta}+\ln x,$$
and
$$\frac{\partial^{2}\ln f(x;\theta)}{\partial\theta^{2}}=-\frac{1}{\theta^{2}}.$$
Since $E(-1/\theta^{2})=-1/\theta^{2}$, the lower bound of the variance of every unbiased estimator of $\theta$ is $\theta^{2}/n$. Moreover, the maximum likelihood estimator
$$\hat\theta=-\,n\Big/\sum_{i=1}^{n}\ln X_i$$
has an approximate normal distribution with mean $\theta$ and variance $\theta^{2}/n$. Thus, in a limiting sense, $\hat\theta$ is the unbiased minimum variance estimator of $\theta$.
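A simulation sketch supports this claim. Sampling via the inverse c.d.f. $x=u^{1/\theta}$ (since $F(x)=x^{\theta}$ on $0<x<1$), the variance of $\hat\theta=-n/\sum\ln X_i$ should be close to the bound $\theta^{2}/n$; the values $\theta=3$, $n=200$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 3.0, 200, 10000     # arbitrary illustration values

# F(x) = x**theta on (0, 1), so X = U**(1/theta) samples f(x; theta) = theta * x**(theta - 1)
u = rng.uniform(size=(reps, n))
x = u ** (1.0 / theta)

mle = -n / np.log(x).sum(axis=1)     # theta-hat = -n / sum(ln X_i)

print(mle.mean())                    # close to theta = 3 (small bias vanishes as n grows)
print(mle.var(ddof=1))               # close to the lower bound theta**2 / n = 0.045
```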
To measure the value of estimators, their variances are compared to the Rao–Cramér lower bound. The ratio of the Rao–Cramér lower bound to the actual variance of any unbiased estimator is called the efficiency of that estimator. An estimator with an efficiency of 50% requires $1/0.5=2$ times as many sample observations to do as well in estimation as can be done with the unbiased minimum variance estimator (the 100% efficient estimator).
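For a concrete (simulated) illustration within the exponential setting already used above: $n X_{(1)}$, that is, $n$ times the sample minimum, is also an unbiased estimator of $\theta$ (the minimum of $n$ independent exponentials is exponential with mean $\theta/n$), but it has variance $\theta^{2}$, so its efficiency is only $1/n$. The sketch below compares both estimators to the Rao–Cramér bound; the values of $\theta$ and $n$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 2.0, 25, 50000      # arbitrary illustration values

x = rng.exponential(scale=theta, size=(reps, n))

# Two unbiased estimators of theta for the exponential distribution:
mean_est = x.mean(axis=1)            # the MLE; variance theta**2 / n (the lower bound)
min_est = n * x.min(axis=1)          # n * X_(1) is also unbiased; variance theta**2

crlb = theta**2 / n
print(crlb / mean_est.var(ddof=1))   # efficiency near 1.0 (100%)
print(crlb / min_est.var(ddof=1))    # efficiency near 1/n = 0.04 (4%)
```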