The criterion used in the previous section---minimize the average cost of an incorrect decision---may seem to be a contrived way of quantifying decisions. Well, often it is. For example, the Bayesian decision rule depends explicitly on the a priori probabilities. A rational method of assigning values to these---either by experiment or through true knowledge of the relative likelihood of each model---may be unreasonable. In this section, we develop alternative decision rules that try to respond to such objections. One essential point will emerge from these considerations: \emph{the likelihood ratio persists as the core of optimal detectors as optimization criteria and problem complexity change}. Even criteria remote from error-based performance measures can result in the likelihood ratio test. Such an invariance does not occur often in signal processing and underlines the likelihood ratio test's importance.
Maximizing the probability of a correct decision
As only one model can describe any given set of data (the models are mutually exclusive), the probability of being correct $P_c$ in distinguishing two models is given by
\[
P_c = \Pr[\text{say } \mathcal{M}_0 \text{ when } \mathcal{M}_0 \text{ true}] + \Pr[\text{say } \mathcal{M}_1 \text{ when } \mathcal{M}_1 \text{ true}] .
\]
We wish to determine the optimum placement of the decision regions. Expressing the probability of being correct in terms of the likelihood functions $p_{\mathbf{r}|\mathcal{M}_i}(\mathbf{r}|\mathcal{M}_i)$, the a priori probabilities $\pi_i$, and the decision regions $Z_i$, we have
\[
P_c = \pi_0 \int_{Z_0} p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r}|\mathcal{M}_0)\, d\mathbf{r} + \pi_1 \int_{Z_1} p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}|\mathcal{M}_1)\, d\mathbf{r} .
\]
We want to maximize $P_c$ by selecting the decision regions $Z_0$ and $Z_1$. Mimicking the ideas of the previous section, we associate each value of $\mathbf{r}$ with the largest integrand in the expression for $P_c$. Decision region $Z_0$, for example, is defined by the collection of values of $\mathbf{r}$ for which the first term is largest. As all of the quantities involved are non-negative, the decision rule maximizing the probability of a correct decision is:

Given $\mathbf{r}$, choose $\mathcal{M}_i$ for which the product $\pi_i\, p_{\mathbf{r}|\mathcal{M}_i}(\mathbf{r}|\mathcal{M}_i)$ is largest.
When we must select among more than two models, this result still applies (prove this for yourself). When only two models are in contention, simple manipulations lead to the likelihood ratio test:
\[
\frac{p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}|\mathcal{M}_1)}{p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r}|\mathcal{M}_0)} \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \frac{\pi_0}{\pi_1} .
\]
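As a concrete illustration (a sketch, not part of the original development), the following Python fragment implements this rule for two hypothetical unit-variance Gaussian models with means 0 and 1; the priors and the test point are assumed values chosen only for the example. It also checks that the maximum-product rule and the likelihood ratio test agree.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Assumed example: M_0 says r ~ N(0,1), M_1 says r ~ N(1,1).
pi = np.array([0.7, 0.3])       # assumed a priori probabilities pi_0, pi_1
means = np.array([0.0, 1.0])    # model means under M_0 and M_1

def map_decision(r):
    """Choose the model i maximizing pi_i * p(r | M_i)."""
    products = pi * norm.pdf(r, loc=means, scale=1.0)
    return int(np.argmax(products))

def lrt_decision(r):
    """Equivalent form: decide M_1 when the likelihood ratio exceeds pi_0/pi_1."""
    ratio = norm.pdf(r, loc=1.0) / norm.pdf(r, loc=0.0)
    return 1 if ratio > pi[0] / pi[1] else 0

for r in (-0.5, 0.9, 2.0):
    assert map_decision(r) == lrt_decision(r)
\end{verbatim}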
Note that if the Bayes' costs were chosen so that $C_{ii} = 0$ and $C_{ij} = C$ ($i \neq j$), the Bayes' cost and the maximum-probability-correct thresholds would be the same.
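To check this claim, substitute these costs into the Bayes threshold derived in the previous section (assumed here to take its standard form):
\[
\eta = \frac{\pi_0 (C_{10} - C_{00})}{\pi_1 (C_{01} - C_{11})} = \frac{\pi_0 (C - 0)}{\pi_1 (C - 0)} = \frac{\pi_0}{\pi_1} ,
\]
which is precisely the threshold of the maximum-probability-correct test above.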
To evaluate the quality of the decision rule, we usually compute the probability of error $P_e$ rather than the probability of being correct. This quantity can be expressed in terms of the observations, the likelihood ratio $\Lambda(\mathbf{r})$ (with threshold $\eta$), and the sufficient statistic $\Upsilon(\mathbf{r})$ (with threshold $\gamma$):
\[
\begin{aligned}
P_e &= \pi_0 \int_{Z_1} p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r}|\mathcal{M}_0)\, d\mathbf{r} + \pi_1 \int_{Z_0} p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}|\mathcal{M}_1)\, d\mathbf{r} \\
    &= \pi_0 \int_{\Lambda > \eta} p_{\Lambda|\mathcal{M}_0}(\Lambda|\mathcal{M}_0)\, d\Lambda + \pi_1 \int_{\Lambda < \eta} p_{\Lambda|\mathcal{M}_1}(\Lambda|\mathcal{M}_1)\, d\Lambda \\
    &= \pi_0 \int_{\Upsilon > \gamma} p_{\Upsilon|\mathcal{M}_0}(\Upsilon|\mathcal{M}_0)\, d\Upsilon + \pi_1 \int_{\Upsilon < \gamma} p_{\Upsilon|\mathcal{M}_1}(\Upsilon|\mathcal{M}_1)\, d\Upsilon .
\end{aligned}
\]
These expressions point out that the likelihood ratio and the sufficient statistic can each be considered a function of the observations $\mathbf{r}$; hence, they are random variables and have probability densities for each model. When the likelihood ratio is non-monotonic, the first expression is the most difficult to evaluate. When it is monotonic, the middle expression often proves to be the most difficult. No matter how it is calculated, \emph{no other decision rule can yield a smaller probability of error}. This statement is obvious, as we minimized the probability of error implicitly by maximizing the probability of being correct because $P_e = 1 - P_c$.
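As a quick numerical sanity check of this minimality claim (again a sketch under the same assumed Gaussian models and priors as before, not part of the original text), one can estimate $P_e$ by Monte Carlo at the optimal threshold $\pi_0/\pi_1$ and at deliberately perturbed thresholds:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
pi0, pi1 = 0.7, 0.3                 # assumed a priori probabilities
n = 200_000

# Draw the true model for each trial, then draw r from that model
# (M_0: N(0,1), M_1: N(1,1) -- assumed densities for illustration).
truth = rng.random(n) < pi1         # True means M_1 generated the data
r = rng.standard_normal(n) + truth  # adds the unit mean shift under M_1

ratio = norm.pdf(r, loc=1.0) / norm.pdf(r, loc=0.0)   # likelihood ratio

def prob_error(threshold):
    """Empirical P_e for the rule: choose M_1 when ratio > threshold."""
    decide1 = ratio > threshold
    return float(np.mean(decide1 != truth))

eta = pi0 / pi1                     # optimal threshold
for t in (0.5 * eta, eta, 2.0 * eta):
    print(f"threshold {t:5.2f}: P_e ~ {prob_error(t):.4f}")
\end{verbatim}
The estimate at $\eta = \pi_0/\pi_1$ should be the smallest of the three, consistent with the statement above.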