In hypothesis testing, as in all other areas of statistical inference, there are two major schools of thought on designing good tests: Bayesian and frequentist (or classical). Consider the simple binary hypothesis testing problem

$$\mathcal{H}_0 : x \sim f_0(x)$$
$$\mathcal{H}_1 : x \sim f_1(x)$$

In the Bayesian setup, the prior probability of each hypothesis occurring is assumed known. This approach to hypothesis testing is represented by the minimum Bayes risk criterion and the minimum probability of error criterion.
In some applications, however, it may not be reasonable to assign an a priori probability to each hypothesis; in these cases a decision rule that does not depend on prior probabilities is needed.
The Neyman-Pearson criterion is stated in terms of certain probabilities associated with a particular hypothesis test. The relevant quantities are summarized in the table below. Depending on the setting, different terminology is used.
| Probability | Name (statistics) | Notation (statistics) | Name (signal processing) | Notation (signal processing) |
|---|---|---|---|---|
| $P_{10} = \Pr(\text{declare } \mathcal{H}_1 \mid \mathcal{H}_0)$ | size | $\alpha$ | false-alarm probability | $P_F$ |
| $P_{11} = \Pr(\text{declare } \mathcal{H}_1 \mid \mathcal{H}_1)$ | power | $1 - \beta$ | detection probability | $P_D$ |
Here $P_{ij}$ denotes the probability that we declare hypothesis $\mathcal{H}_i$ to be in effect when $\mathcal{H}_j$ is actually in effect. The probabilities $P_{00}$ and $P_{01}$ (the latter sometimes called the miss probability) are equal to $1 - P_F$ and $1 - P_D$, respectively. Thus, $P_F$ and $P_D$ represent the two degrees of freedom in a binary hypothesis test. Note that $P_F$ and $P_D$ do not involve the a priori probabilities of the hypotheses. These two probabilities are related to each other through the decision regions. If $R_1$ is the decision region for $\mathcal{H}_1$, we have

$$P_F = \int_{R_1} f_0(x)\,dx, \qquad P_D = \int_{R_1} f_1(x)\,dx.$$

The densities $f_0(x)$ and $f_1(x)$ are nonnegative, so as $R_1$ shrinks, both probabilities tend to zero; as $R_1$ expands, both tend to one. The ideal case, where $P_D = 1$ and $P_F = 0$, cannot occur unless the distributions do not overlap (that is, $\int f_0(x) f_1(x)\,dx = 0$).
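The decision-region integrals above are easy to check numerically. The following is a minimal sketch, not from the original text, assuming $R_1$ is a half-line $\{x > \gamma\}$ and using illustrative unit-variance Gaussian densities for $f_0$ and $f_1$; the function name `error_probabilities` is hypothetical.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Illustrative densities (assumption): f0 under H0, f1 under H1.
f0 = norm(loc=0.0, scale=1.0).pdf
f1 = norm(loc=1.0, scale=1.0).pdf

def error_probabilities(gamma):
    """Return (P_F, P_D) for the decision region R_1 = {x : x > gamma}."""
    P_F, _ = integrate.quad(f0, gamma, np.inf)  # integral of f0 over R_1
    P_D, _ = integrate.quad(f1, gamma, np.inf)  # integral of f1 over R_1
    return P_F, P_D

# Shrinking R_1 (raising gamma) drives both probabilities toward zero;
# expanding it (lowering gamma) drives both toward one.
for gamma in (-3.0, 0.0, 3.0):
    print(gamma, error_probabilities(gamma))
```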
Consider the simple binary hypothesis test of a scalar measurement $x$:

$$\mathcal{H}_0 : x \sim \mathcal{N}(0, 1)$$
$$\mathcal{H}_1 : x \sim \mathcal{N}(1, 1)$$

Suppose we use a threshold test, declaring $\mathcal{H}_1$ when $x > \gamma$, where $\gamma$ is a free parameter. Then the false-alarm and detection probabilities are

$$P_F = Q(\gamma), \qquad P_D = Q(\gamma - 1),$$

where $Q$ denotes the Q-function. Since the $Q$-function is monotonically decreasing, it is evident that both $P_F$ and $P_D$ decay to zero as $\gamma$ increases. There is also an explicit relationship

$$P_D = Q\bigl(Q^{-1}(P_F) - 1\bigr).$$

A common means of displaying this relationship is with a receiver operating characteristic (ROC) curve, which is nothing more than a plot of $P_D$ versus $P_F$.
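As a quick illustration of these formulas (a sketch under the assumption that SciPy's `norm.sf` and `norm.isf` serve as the Q-function and its inverse), the following traces the ROC for the threshold test above and checks the explicit relationship between $P_D$ and $P_F$:

```python
import numpy as np
from scipy.stats import norm

Q = norm.sf      # Q(x) = 1 - Phi(x), the standard normal tail probability
Qinv = norm.isf  # inverse Q-function

gammas = np.linspace(-4.0, 6.0, 11)
P_F = Q(gammas)        # false-alarm probability, Q(gamma)
P_D = Q(gammas - 1.0)  # detection probability, Q(gamma - 1)

# The explicit relationship P_D = Q(Q^{-1}(P_F) - 1) should reproduce P_D.
P_D_check = Q(Qinv(P_F) - 1.0)

for pf, pd, pd2 in zip(P_F, P_D, P_D_check):
    print(f"P_F = {pf:.4f}  P_D = {pd:.4f}  Q(Qinv(P_F) - 1) = {pd2:.4f}")
```

Plotting `P_D` against `P_F` over a fine grid of thresholds produces the ROC curve described above.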
The Neyman-Pearson criterion says that we should construct our decision rule to have maximum probability of detection while not allowing the probability of false alarm to exceed a certain value $\alpha$. In other words, the optimal detector according to the Neyman-Pearson criterion is the solution to the following constrained optimization problem:

$$\max P_D \quad \text{subject to} \quad P_F \le \alpha.$$
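For the Gaussian threshold test in the example above, the false-alarm constraint pins down the threshold: setting $\gamma = Q^{-1}(\alpha)$ gives $P_F = \alpha$ exactly, and the corresponding $P_D$ follows. The sketch below assumes that example's unit-variance Gaussians and that the threshold test meeting the constraint with equality is the detector sought; `neyman_pearson_threshold` is a hypothetical helper name.

```python
from scipy.stats import norm

def neyman_pearson_threshold(alpha, shift=1.0):
    """Threshold and detection probability for the N(0,1)-vs-N(shift,1) test
    with the false-alarm constraint met with equality: P_F = alpha."""
    gamma = norm.isf(alpha)       # gamma = Q^{-1}(alpha), so Q(gamma) = alpha
    P_D = norm.sf(gamma - shift)  # resulting detection probability Q(gamma - shift)
    return gamma, P_D

gamma, P_D = neyman_pearson_threshold(alpha=0.05)
print(f"gamma = {gamma:.3f}, P_D = {P_D:.3f}")  # e.g., gamma is about 1.645
```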