Once a model is specified with its parameters and data have been collected, one is in a position to evaluate the model's goodness of fit, that is, how well the model fits the observed pattern of data. Goodness of fit is assessed by finding the parameter values of the model that best fit the data, a procedure called parameter estimation.
There are two generally accepted methods of parameter estimation: least squares estimation (LSE) and maximum likelihood estimation (MLE). The former is well known through its connection with linear regression; the sum of squared errors and the root mean squared deviation are tied to the method. On the other hand, MLE is not widely recognized among modelers in psychology, though it is by far the most commonly used method of parameter estimation in the statistics community. LSE may be useful for obtaining a descriptive measure that summarizes observed data, but MLE is more suitable for statistical inference such as model comparison. LSE offers no basis for constructing confidence intervals or testing hypotheses, whereas both are naturally built into MLE.
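As a small illustration of the two approaches, the following Python sketch (with simulated data and arbitrary parameter values chosen purely for illustration) obtains the LSE estimate of a constant model in closed form and the MLE estimates of a normal model by numerically minimizing the negative log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data; the true values 10.0 and 2.0 are arbitrary choices for illustration.
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=50)

# LSE for the constant model c: minimize sum((x_i - c)^2); the closed-form
# solution is the sample mean.
lse_estimate = x.mean()

# MLE for a normal model: minimize the negative log-likelihood in (mu, log sigma);
# the log-parameterization simply keeps sigma positive during the search.
def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + x.size * np.log(sigma)

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
mu_mle, sigma_mle = fit.x[0], np.exp(fit.x[1])

print(lse_estimate, mu_mle, sigma_mle)  # mu_mle agrees with the LSE estimate of the mean
```

For the mean the two estimates coincide, but the MLE also delivers a scale estimate and, more generally, a likelihood surface from which confidence intervals and hypothesis tests can be derived.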
UNBIASED AND BIASED ESTIMATORS
Let us consider random variables for which the functional form of the p.d.f. is known, but whose distribution depends on an unknown parameter $\theta$ that may have any value in a set $\Omega$, which is called the parameter space. In estimation a random sample from the distribution is taken to elicit some information about the unknown parameter $\theta$. The experiment is repeated $n$ independent times, the sample $X_1, X_2, \ldots, X_n$ is observed, and one tries to guess the value of $\theta$ using the observations $x_1, x_2, \ldots, x_n$.
The function of $X_1, X_2, \ldots, X_n$ used to guess $\theta$, say $u(X_1, X_2, \ldots, X_n)$, is called an estimator of $\theta$. We want it to be such that the computed estimate $u(x_1, x_2, \ldots, x_n)$ is usually close to $\theta$. Let $Y = u(X_1, X_2, \ldots, X_n)$ be an estimator of $\theta$. For $Y$ to be a good estimator of $\theta$, a very desirable property is that its mean be equal to $\theta$, namely $E(Y) = \theta$; an estimator with this property is called an unbiased estimator of $\theta$, and otherwise it is said to be biased.
It is required not only that an estimator have expectation equal to $\theta$, but also that the variance of the estimator be as small as possible. If there are two unbiased estimators of $\theta$, one would choose the one with the smaller variance. In general, with a random sample $X_1, X_2, \ldots, X_n$ of a fixed sample size $n$, a statistician would like to find the estimator $Y = u(X_1, X_2, \ldots, X_n)$ of an unknown parameter $\theta$ which minimizes the mean (expected) value of the squared error (difference) $Y - \theta$, that is, minimizes $$E\left[(Y-\theta)^2\right] = E\left\{\left[u(X_1, X_2, \ldots, X_n) - \theta\right]^2\right\}.$$
The statistic $Y$ that minimizes $E\left[(Y-\theta)^2\right]$ is the one with minimum mean square error. If we restrict our attention to unbiased estimators only, then $\operatorname{Var}(Y) = E\left[(Y-\theta)^2\right]$, and the unbiased statistic $Y$ that minimizes this expression is said to be the unbiased minimum variance estimator of $\theta$.
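The reason the restriction to unbiased estimators turns mean square error into variance is the standard decomposition
$$E\left[(Y-\theta)^2\right] = E\left[\bigl(Y-E(Y)\bigr)^2\right] + \bigl(E(Y)-\theta\bigr)^2 = \operatorname{Var}(Y) + \bigl(\text{bias}\bigr)^2,$$
which follows by adding and subtracting $E(Y)$ inside the square and noting that the cross term $2\bigl(E(Y)-\theta\bigr)E\bigl[Y-E(Y)\bigr]$ vanishes. For an unbiased estimator the bias term is zero, so minimizing the mean square error is the same as minimizing the variance.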
One of the oldest procedures for estimating parameters is the method of moments. Another method for finding an estimator of an unknown parameter is called the method of maximum likelihood. In general, in the method of moments, if there are k parameters that have to be estimated, the first k sample moments are set equal to the first k population moments that are given in terms of the unknown parameters.
Let the distribution of $X$ be $N(\mu, \sigma^2)$. Then $E(X) = \mu$ and $E(X^2) = \sigma^2 + \mu^2$. Given a random sample of size $n$, the first two sample moments are given by $$m_1 = \frac{1}{n}\sum_{i=1}^{n} x_i \quad \text{and} \quad m_2 = \frac{1}{n}\sum_{i=1}^{n} x_i^2 .$$
We set $m_1 = E(X)$ and $m_2 = E(X^2)$ and solve for $\mu$ and $\sigma^2$: $$\frac{1}{n}\sum_{i=1}^{n} x_i = \mu \quad \text{and} \quad \frac{1}{n}\sum_{i=1}^{n} x_i^2 = \sigma^2 + \mu^2 .$$
The first equation yields $\bar{x}$ as the estimate of $\mu$. Replacing $\mu^2$ with $\bar{x}^2$ in the second equation and solving for $\sigma^2$,
we obtain $$\frac{1}{n}\sum_{i=1}^{n} x_i^2 - \bar{x}^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2 = v$$ for the solution of $\sigma^2$.
Thus the method of moments estimators of $\mu$ and $\sigma^2$ are $\tilde{\mu} = \bar{X}$ and $\tilde{\sigma}^2 = V = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2$. Of course, $\tilde{\mu} = \bar{X}$ is unbiased, whereas $\tilde{\sigma}^2 = V$ is biased, since $E(V) = \frac{n-1}{n}\sigma^2$.
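A short simulation makes both claims concrete. The following Python sketch (the true parameter values and the sample size are hypothetical choices for illustration) computes the method of moments estimates from repeated samples and compares their averages with the values they are meant to estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma_true, n = 5.0, 3.0, 10   # hypothetical values for illustration

def method_of_moments(x):
    """Method of moments for a normal sample:
    mu_tilde = sample mean, sigma2_tilde = m2 - m1**2 = (1/n) * sum((x_i - x_bar)^2)."""
    m1 = x.mean()
    m2 = (x ** 2).mean()
    return m1, m2 - m1 ** 2

# Average the estimates over many repeated samples to see the bias of V.
reps = 20000
estimates = np.array([method_of_moments(rng.normal(mu_true, sigma_true, n))
                      for _ in range(reps)])

print(estimates[:, 0].mean())          # close to mu_true (unbiased)
print(estimates[:, 1].mean())          # close to (n-1)/n * sigma_true**2 (biased)
print((n - 1) / n * sigma_true ** 2)   # theoretical expectation of V
```

The average of $\tilde{\mu}$ settles near $\mu$, while the average of $V$ settles near $\frac{n-1}{n}\sigma^2$ rather than $\sigma^2$, which is exactly the bias noted above.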
At this stage the question arises which of two different estimators, say $\hat{\theta}$ and $\tilde{\theta}$, of a parameter $\theta$ one should use. Most statisticians select the one that has the smaller mean square error; for example, if $E\left[(\hat{\theta}-\theta)^2\right] < E\left[(\tilde{\theta}-\theta)^2\right]$, then $\hat{\theta}$ seems to be preferred. This means that if both estimators are unbiased, one would select the one with the smaller variance.
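For instance, both the sample mean and the sample median are unbiased estimators of the mean of a normal distribution, so the comparison reduces to their variances. A small simulation sketch (with hypothetical settings chosen only for illustration) makes the difference visible:

```python
import numpy as np

rng = np.random.default_rng(2)
mu_true, sigma_true, n, reps = 0.0, 1.0, 25, 20000   # hypothetical settings

samples = rng.normal(mu_true, sigma_true, size=(reps, n))
means = samples.mean(axis=1)           # estimator 1: sample mean
medians = np.median(samples, axis=1)   # estimator 2: sample median

# Both are unbiased for mu, so comparing mean square errors amounts to
# comparing variances; for normal data the sample mean has the smaller one.
print(((means - mu_true) ** 2).mean())    # approx sigma^2 / n = 0.04
print(((medians - mu_true) ** 2).mean())  # approx pi * sigma^2 / (2n), about 0.06
```

The sample mean would therefore be preferred here, although the sample median can win for heavier-tailed distributions.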
Next, other questions should be considered. Namely, given an estimate for a parameter, how accurate is the estimate? How confident can one be about the closeness of the estimate to the unknown parameter?