Sufficient statistics arise in nearly every aspect of statistical inference. It is important to understand them before progressing to areas such as hypothesis testing and parameter estimation.
Suppose we observe an $N$-dimensional random vector $X$, characterized by the density or mass function $f_\theta(x)$, where $\theta$ is a $p$-dimensional vector of parameters to be estimated. The functional form of $f_\theta$ is assumed known. The parameter $\theta$ completely determines the distribution of $X$. Conversely, a measurement $x$ of $X$ provides information about $\theta$ through the probability law $f_\theta(x)$.
Suppose $X = (X_1, \ldots, X_N)^T$, where the $X_n \sim \mathcal{N}(\theta, 1)$ are IID. Here $\theta$ is a scalar parameter specifying the mean. The distribution of $X$ is determined by $\theta$ through the density
$$ f_\theta(x) = \frac{1}{(2\pi)^{N/2}} \exp\!\left(-\frac{1}{2}\sum_{n=1}^{N}(x_n-\theta)^2\right). $$
On the other hand, if we observe a realization $x$ whose components are all far from zero, then we may safely assume $\theta = 0$ is highly unlikely.
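To make this concrete, here is a minimal Python sketch (the sample size $N = 10$, the true mean $\theta = 5$, and unit variance are illustrative assumptions, not values from the text). It draws IID Gaussian samples and compares the log-likelihood of $\theta = 0$ with that of the sample mean, showing why $\theta = 0$ becomes implausible once the data sit far from zero.

```python
import numpy as np

# Assumed values: N = 10 IID samples from N(theta, 1) with true mean theta = 5.
rng = np.random.default_rng(0)
N, theta_true = 10, 5.0
x = rng.normal(loc=theta_true, scale=1.0, size=N)

def log_likelihood(theta, x):
    # log f_theta(x) for IID N(theta, 1) samples
    return -0.5 * len(x) * np.log(2 * np.pi) - 0.5 * np.sum((x - theta) ** 2)

print("sample mean:", x.mean())
print("log-likelihood at theta = 0     :", log_likelihood(0.0, x))
print("log-likelihood at theta = x_bar :", log_likelihood(x.mean(), x))
```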
The $N$-dimensional observation $X$ carries information about the $p$-dimensional parameter vector $\theta$. If $p < N$, one may ask the following question: Can we compress $x$ into a low-dimensional statistic without any loss of information? Does there exist some function $t = T(x)$, where the dimension of $t$ is $M < N$, such that $t$ carries all the useful information about $\theta$?
If so, for the purpose of studying $\theta$ we could discard the raw measurements $x$ and retain only the low-dimensional statistic $t$. We call $t$ a sufficient statistic. The following definition captures this notion precisely:
1. Let $f_\theta(x)$ denote the joint density or probability mass function of $X$. If $T(X)$ is a sufficient statistic for $\theta$, then the conditional distribution of $X$, given $T(X) = t$, is independent of $\theta$:
$$ f_\theta(x \mid T(X) = t) = f(x \mid t). $$
2. Given $T(X) = t$, full knowledge of the measurement $X = x$ brings no additional information about $\theta$. Thus, we may discard $x$ and retain only the compressed statistic $t$.
3. Any inference strategy based on $f_\theta(x)$ may be replaced by a strategy based on $f_\theta(t)$.
(Scharf, p. 78) Suppose a binary information source emits a sequence of binary (0 or 1) valued, independent variables $x_1, x_2, \ldots, x_N$. Each binary symbol may be viewed as a realization of a Bernoulli trial: $x_n \sim \text{Bernoulli}(\theta)$, IID. The parameter $\theta = \Pr[x_n = 1]$ is to be estimated.
The probability mass function for the random sample is
$$ f_\theta(x) = \prod_{n=1}^{N} \theta^{x_n}(1-\theta)^{1-x_n} = \theta^{k}(1-\theta)^{N-k}, $$
where $k = \sum_{n=1}^{N} x_n$ denotes the number of ones in the sample.
We will show that $k$ is a sufficient statistic for $\theta$. This will entail showing that the conditional probability mass function $f_\theta(x \mid k)$ does not depend on $\theta$.
The distribution of the number of ones in $N$ independent Bernoulli trials is binomial:
$$ f_\theta(k) = \binom{N}{k}\theta^{k}(1-\theta)^{N-k}. $$
Next, consider the joint distribution of $(x_1, \ldots, x_N, k)$. Since $k$ is determined by $x$, we have
$$ f_\theta(x, k) = f_\theta(x) = \theta^{k}(1-\theta)^{N-k} $$
whenever $k = \sum_{n=1}^{N} x_n$ (and zero otherwise). Thus, the conditional probability may be written
$$ f_\theta(x \mid k) = \frac{f_\theta(x, k)}{f_\theta(k)} = \frac{\theta^{k}(1-\theta)^{N-k}}{\binom{N}{k}\theta^{k}(1-\theta)^{N-k}} = \frac{1}{\binom{N}{k}}, $$
which does not depend on $\theta$. Therefore $k$ is a sufficient statistic for $\theta$.
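The calculation above can also be checked by simulation. The following Python sketch (the values $N = 4$, $k = 2$, the two choices of $\theta$, and the number of trials are illustrative assumptions) estimates the conditional probability of one particular sequence given $k$ ones and compares it with $1/\binom{N}{k}$; the estimate is essentially the same for both values of $\theta$.

```python
import numpy as np
from math import comb

# Check that, conditioned on k = sum(x), every binary sequence with k ones is
# equally likely (probability 1 / C(N, k)), regardless of theta.
rng = np.random.default_rng(1)
N, k_target, trials = 4, 2, 200_000   # assumed example values

for theta in (0.2, 0.7):
    samples = rng.random((trials, N)) < theta       # Bernoulli(theta) draws
    kept = samples[samples.sum(axis=1) == k_target] # keep sequences with k ones
    seq = np.array([1, 1, 0, 0], dtype=bool)        # one sequence with k = 2 ones
    freq = np.mean(np.all(kept == seq, axis=1))
    print(f"theta={theta}: P[x=(1,1,0,0) | k=2] ~ {freq:.3f}  "
          f"(exact 1/C(4,2) = {1 / comb(4, 2):.3f})")
```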
In the previous example, suppose we wish to store in memory the information we possess about $\theta$. Compare the savings, in terms of bits, we gain by storing the sufficient statistic $k$ instead of the full sample $x_1, \ldots, x_N$.
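As a rough guide for this exercise, the following Python sketch (the sample sizes are chosen arbitrarily) counts the bits needed for each representation: the raw sample requires one bit per binary symbol, while $k$ only needs enough bits to index its $N + 1$ possible values.

```python
from math import ceil, log2

# Compare storage for the raw binary sample vs. the sufficient statistic k.
def storage_bits(N):
    raw_bits = N                    # one bit per binary symbol
    stat_bits = ceil(log2(N + 1))   # k takes one of N + 1 distinct values
    return raw_bits, stat_bits

for N in (8, 100, 10_000):          # assumed example sample sizes
    raw, stat = storage_bits(N)
    print(f"N = {N:6d}: raw sample = {raw} bits, sufficient statistic = {stat} bits")
```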
In the example above, we had to guess the sufficient statistic and work out the conditional probability by hand. In general, this is a tedious way to go about finding sufficient statistics. Fortunately, spotting sufficient statistics can be made easier by the Fisher-Neyman Factorization Theorem.
Sufficient statistics have many uses in statistical inference problems. In hypothesis testing, the Likelihood Ratio Test can often be reduced to a sufficient statistic of the data. In parameter estimation, the Minimum Variance Unbiased Estimator of a parameter can be characterized by sufficient statistics and the Rao-Blackwell Theorem .
Minimal sufficient statistics are, roughly speaking, sufficient statistics that cannot be compressed any more without losing information about the unknown parameter. Completeness is a technical characterization of sufficient statistics that allows one to prove minimality. These topics are covered in detail in this module.
Further examples of sufficient statistics may be found in the module on the Fisher-Neyman Factorization Theorem .