We assumed in the previous sections that we have a few well-specified models (hypotheses) for a set of observations. These models were probabilistic; to apply the techniques of statistical hypothesis testing, the models take the form of conditional probability densities. In many interesting circumstances, the exact nature of these densities may not be known. For example, we may know a priori that the mean is either zero or some constant (as in the Gaussian example). However, the variance of the observations may not be known, or the value of the non-zero mean may be in doubt. In an array processing context, these respective situations could occur when the background noise level is unknown (a likely possibility in applications) or when the signal amplitude is not known because of far-field range uncertainties (the further the source of propagating energy, the smaller its received energy at each sensor). In an extreme case, we can question the exact nature of the probability densities (everything is not necessarily Gaussian!). The model evaluation problem can still be posed for these situations; we classify the "unknown" aspects of a model testing problem as either parametric (the variance is not known, for example) or nonparametric (the formula for the density is in doubt). The former situation has a relatively long history compared to the latter; many techniques can be used to approach parametric problems, while the latter is a subject of current research (Gibson and Melsa). We concentrate on parametric problems here.
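For concreteness, the Gaussian example referred to above can be written out explicitly; the model labels $\mathcal{M}_0$, $\mathcal{M}_1$, the sample index $l$, and the number of observations $L$ are notational assumptions for this sketch rather than quotations from the text:
$$
\mathcal{M}_0:\ r_l \sim \mathcal{N}(0,\sigma^2), \qquad
\mathcal{M}_1:\ r_l \sim \mathcal{N}(m,\sigma^2), \qquad l = 0,\dots,L-1,
$$
where the variance $\sigma^2$, and possibly the non-zero mean $m$, is not known in advance.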
We describe the dependence of the conditional density on a set of parameters by incorporating the parameter vector $\boldsymbol{\theta}$ as part of the condition. We write the likelihood function as $p_{\mathbf{r}\mid\mathcal{M}_i}(\mathbf{r}\mid\mathcal{M}_i;\boldsymbol{\theta})$ for the parametric problem. In statistics, this situation is said to be a composite hypothesis (Cramér). Such situations can be further categorized according to whether the parameters are random or nonrandom.
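As a concrete, purely illustrative sketch of this notation, the snippet below writes the likelihood as an ordinary function of the data, the model index, and the parameter vector; the Gaussian form and the packing of the parameters as (mean, variance) are assumptions made for the example, not taken from the text.

```python
import numpy as np
from scipy.stats import norm

def likelihood(r, model, theta):
    """p(r | M_i; theta) for i.i.d. Gaussian observations.

    Model 0: zero mean; model 1: mean theta[0]; theta[1] is the common variance.
    """
    mean = 0.0 if model == 0 else theta[0]
    return np.prod(norm.pdf(r, loc=mean, scale=np.sqrt(theta[1])))

r = np.array([0.8, 1.3, 0.4, 1.1])   # observations
theta = np.array([1.0, 0.5])         # candidate (mean, variance) pair
print(likelihood(r, 0, theta), likelihood(r, 1, theta))
```

Because each model's likelihood changes with the parameter values, no likelihood ratio can be formed until the unknown parameters are dealt with; that is the essential difficulty posed by a composite hypothesis.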
For a parameter to be random, we have an expression for its a priori density, which could depend on the particular model. As stated many times, a specification of a density usually expresses some knowledge about the range of values a parameter may assume and the relative probability of those values. Saying that a parameter has a uniform distribution implies that the values it assumes are equally likely, not that we have no idea what the values might be and express this ignorance by a uniform distribution. If we are ignorant of the underlying probability distribution that describes the values of a parameter, we will characterize them simply as being unknown (not random).
Once we have considered the random parameter case, nonrandom but unknown parameters will be discussed.
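To make the distinction concrete, here is a minimal, hypothetical sketch: a scalar parameter theta (the mean of Gaussian observations) is treated first as random, with an assumed uniform a priori density that is integrated out, and then as unknown, where the likelihood simply remains a function of theta. All names and numbers are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def likelihood_given_theta(r, theta, sigma2=1.0):
    """p(r | M_1; theta): i.i.d. Gaussian observations with mean theta."""
    return np.prod(norm.pdf(r, loc=theta, scale=np.sqrt(sigma2)))

r = np.array([0.9, 1.4, 0.6])

# Random parameter: an a priori density is specified (uniform on [0, 2] here),
# expressing that these values are equally likely, and theta is integrated out.
marginal, _ = quad(lambda t: likelihood_given_theta(r, t) * 0.5, 0.0, 2.0)
print("theta random (marginal likelihood):", marginal)

# Unknown (nonrandom) parameter: no prior is claimed; the likelihood remains a
# function of theta, to be handled by the techniques discussed in what follows.
for theta in (0.5, 1.0, 1.5):
    print(f"theta unknown, likelihood evaluated at {theta}:",
          likelihood_given_theta(r, theta))
```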