This optimization problem can be solved using Lagrange multipliers; we seek to find the decision rule that maximizes
$$F = P_D - \lambda (P_F - \alpha'),$$
where $\lambda$ is a positive Lagrange multiplier. This optimization technique amounts to finding the decision rule that maximizes $F$, then finding the value of the multiplier that allows the criterion to be satisfied: the largest detection probability consistent with a false-alarm probability not exceeding the criterion value $\alpha'$. As is usual in the derivation of optimum decision rules, we maximize these quantities with respect to the decision regions. Expressing $P_D$ and $P_F$ in terms of them, we have
$$F = \int_{Z_1} p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})\,d\mathbf{r} - \lambda\left[\int_{Z_1} p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})\,d\mathbf{r} - \alpha'\right] = \lambda\alpha' + \int_{Z_1}\left[p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}) - \lambda\, p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})\right]d\mathbf{r}.$$
To maximize this quantity with respect to the decision region $Z_1$, we need only integrate over those regions of $\mathbf{r}$ where the integrand is positive. The region $Z_1$ thus corresponds to those values of $\mathbf{r}$ where
$$p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}) > \lambda\, p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r}),$$
and the resulting decision rule is
$$\Lambda(\mathbf{r}) = \frac{p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})}{p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})} \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \eta,$$
where the threshold $\eta$ equals the Lagrange multiplier $\lambda$.
The ubiquitous likelihood ratio test again appears; it is indeed the fundamental quantity in hypothesis testing. Using either the logarithm of the likelihood ratio or the sufficient statistic, this result can be expressed as
$$\ln \Lambda(\mathbf{r}) \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \ln \eta$$
or
$$\Upsilon(\mathbf{r}) \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \gamma.$$
We have not as yet found a value for the threshold. The false-alarm probability can be expressed in terms of the Neyman-Pearson threshold in two (useful) ways:
$$P_F = \int_{\eta}^{\infty} p_{\Lambda|\mathcal{M}_0}(\Lambda)\,d\Lambda = \int_{\gamma}^{\infty} p_{\Upsilon|\mathcal{M}_0}(\Upsilon)\,d\Upsilon.$$
One of these implicit equations must be solved for the threshold by setting $P_F$ equal to $\alpha'$. The selection of which to use is usually based on pragmatic considerations: whichever is easiest to compute. From the previous discussion of the relationship between the detection and false-alarm probabilities, we find that to maximize $P_D$ we must allow $P_F$ to be as large as possible while remaining less than $\alpha'$. Thus, we want to find the smallest value of $\eta$ consistent with the constraint. Computation of the threshold is problem-dependent, but a solution always exists.
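As an illustrative sketch (not part of the original derivation), the smallest threshold consistent with the constraint can be found numerically whenever $P_F$ can be evaluated as a function of the threshold, because $P_F$ is nonincreasing in the threshold. The example below assumes, purely for concreteness, a decision statistic that is standard Gaussian under $\mathcal{M}_0$; the criterion value and bracketing interval are illustrative choices.

```python
from statistics import NormalDist

def false_alarm(gamma: float) -> float:
    """P_F(gamma) = Pr[statistic > gamma | M0] for a decision statistic
    assumed (for illustration) to be standard Gaussian under M0."""
    return 1.0 - NormalDist().cdf(gamma)

def np_threshold(alpha: float, lo: float = -10.0, hi: float = 10.0) -> float:
    """Bisect for the threshold gamma satisfying P_F(gamma) = alpha.
    P_F is monotonically nonincreasing in gamma, so bisection converges."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if false_alarm(mid) > alpha:
            lo = mid  # false-alarm rate too large: raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma = np_threshold(0.01)
# For a standard Gaussian statistic this is Q^{-1}(0.01), about 2.326.
```

Bisection is a deliberately simple choice here; any root finder works, since the constraint equation has a single crossing for a continuous, strictly decreasing $P_F$.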
An important application of the likelihood ratio test occurs when $\mathbf{r}$ is a Gaussian random vector for each model. Suppose the models correspond to Gaussian random vectors having different mean values but sharing the same covariance.

- $\mathcal{M}_0$: $\mathbf{r} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$
- $\mathcal{M}_1$: $\mathbf{r} \sim \mathcal{N}(\mathbf{m}, \sigma^2 \mathbf{I})$

$\mathbf{r}$ is of dimension $L$ and has statistically independent, equi-variance components. The vector of means $\mathbf{m} = (m_0, \ldots, m_{L-1})^T$ distinguishes the two models. The likelihood functions associated with this problem are
$$p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r}) = \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{1}{2}\left(\frac{r_l}{\sigma}\right)^2\right)$$
$$p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}) = \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{1}{2}\left(\frac{r_l - m_l}{\sigma}\right)^2\right)$$
The likelihood ratio $\Lambda(\mathbf{r})$ becomes
$$\Lambda(\mathbf{r}) = \frac{\displaystyle\prod_{l=0}^{L-1} \exp\!\left(-\frac{1}{2}\left(\frac{r_l - m_l}{\sigma}\right)^2\right)}{\displaystyle\prod_{l=0}^{L-1} \exp\!\left(-\frac{1}{2}\left(\frac{r_l}{\sigma}\right)^2\right)}.$$
This expression for the likelihood ratio is complicated. In the Gaussian case (and many others), we use the logarithm to reduce the complexity of the likelihood ratio and form a sufficient statistic:
$$\ln \Lambda(\mathbf{r}) = \sum_{l=0}^{L-1} \left(-\frac{(r_l - m_l)^2}{2\sigma^2} + \frac{r_l^2}{2\sigma^2}\right) = \frac{1}{\sigma^2} \sum_{l=0}^{L-1} m_l r_l - \frac{1}{2\sigma^2} \sum_{l=0}^{L-1} m_l^2.$$
The likelihood ratio test then has the much simpler, but equivalent form
$$\sum_{l=0}^{L-1} m_l r_l \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \sigma^2 \ln \eta + \frac{1}{2} \sum_{l=0}^{L-1} m_l^2.$$
To focus on the model evaluation aspects of this problem, let's assume the means equal each other and are a positive constant: $m_l = m > 0$. (What would happen if the mean were negative?) We now have
$$\sum_{l=0}^{L-1} r_l \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \frac{\sigma^2}{m} \ln \eta + \frac{Lm}{2}.$$
Note that all that need be known about the observations is their sum. This quantity is the sufficient statistic for the Gaussian problem: $\Upsilon(\mathbf{r}) = \sum_{l=0}^{L-1} r_l$ and $\gamma = \frac{\sigma^2}{m} \ln \eta + \frac{Lm}{2}$.
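This detector is simple enough to sketch in a few lines. The values of $m$, $\sigma$, and $\eta$ below are illustrative defaults, not taken from the text.

```python
import math

def gaussian_detector(r, m=1.0, sigma=1.0, eta=1.0):
    """Decide between M0 (zero mean) and M1 (mean m > 0) using the
    sufficient statistic sum(r), per the equal-means Gaussian LRT:
    choose M1 when sum(r) > gamma = (sigma^2/m) ln(eta) + L*m/2."""
    L = len(r)
    gamma = (sigma**2 / m) * math.log(eta) + L * m / 2  # threshold on the sum
    return 1 if sum(r) > gamma else 0  # 1 -> M1, 0 -> M0

# With the defaults and L = 10, gamma = 5; this sum is 9.2, so M1 is chosen:
decision = gaussian_detector([0.2, 1.3, 0.8, 1.1, 0.9, 1.4, 0.7, 1.2, 1.0, 0.6])
# decision == 1
```

Note that the observations enter only through their sum, exactly as the sufficient statistic predicts; the individual samples may be discarded once the sum is formed.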
When trying to compute the probability of error or the threshold in the Neyman-Pearson criterion, we must find the conditional probability density of one of the decision statistics: the likelihood ratio, the log-likelihood, or the sufficient statistic. The log-likelihood and the sufficient statistic are quite similar in this problem, but clearly we should use the latter. One practical property of the sufficient statistic is that it usually simplifies computations. For this Gaussian example, the sufficient statistic is a Gaussian random variable under each model.

- $\mathcal{M}_0$: $\Upsilon(\mathbf{r}) \sim \mathcal{N}(0, L\sigma^2)$
- $\mathcal{M}_1$: $\Upsilon(\mathbf{r}) \sim \mathcal{N}(Lm, L\sigma^2)$
To find the probability of error from these densities, we must evaluate the area under a Gaussian probability density function. These integrals are succinctly expressed in terms of $Q(x)$, which denotes the probability that a unit-variance, zero-mean Gaussian random variable exceeds $x$:
$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-\alpha^2/2}\,d\alpha.$$
As $1 - Q(x) = Q(-x)$, the probability of error can be written as
$$P_e = \pi_1 Q\!\left(\frac{Lm - \gamma}{\sqrt{L}\,\sigma}\right) + \pi_0 Q\!\left(\frac{\gamma}{\sqrt{L}\,\sigma}\right).$$
An interesting special case occurs when $\pi_0 = \pi_1 = \frac{1}{2}$. In this case, $\gamma = \frac{Lm}{2}$, and the probability of error becomes
$$P_e = Q\!\left(\frac{\sqrt{L}\,m}{2\sigma}\right).$$
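This expression is easy to evaluate numerically; a quick sketch using the standard normal survival function follows (the values of $L$, $m$, and $\sigma$ are arbitrary illustrations).

```python
import math
from statistics import NormalDist

def Q(x: float) -> float:
    """Probability that a zero-mean, unit-variance Gaussian exceeds x."""
    return 1.0 - NormalDist().cdf(x)

def prob_error(L: int, m: float, sigma: float) -> float:
    """P_e = Q(sqrt(L) * m / (2 sigma)) for equal priors."""
    return Q(math.sqrt(L) * m / (2.0 * sigma))

# P_e falls as the number of observations L grows, but nonlinearly:
for L in (1, 4, 16):
    print(L, prob_error(L, m=1.0, sigma=1.0))
```

Quadrupling $L$ doubles the argument of $Q(\cdot)$, yet the error probability does not simply halve, which previews the nonlinearity discussed next.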
As $Q(\cdot)$ is a monotonically decreasing function, the probability of error decreases with increasing values of the ratio $\frac{\sqrt{L}\,m}{2\sigma}$. However, as shown in its plot, $Q(\cdot)$ decreases in a nonlinear fashion. Thus, increasing $\frac{\sqrt{L}\,m}{2\sigma}$ by a factor of two may decrease the probability of error by a larger or a smaller factor; the amount of change depends on the initial value of the ratio.
To find the threshold for the Neyman-Pearson test from the expressions given previously, we need the area under a Gaussian density:
$$P_F = Q\!\left(\frac{\gamma}{\sqrt{L}\,\sigma}\right) = \alpha'.$$
As $Q(\cdot)$ is a monotonic and continuous function, we can set $P_F$ equal to the criterion value $\alpha'$ with the result
$$\gamma = \sqrt{L}\,\sigma\, Q^{-1}(\alpha'),$$
where $Q^{-1}(\cdot)$ denotes the inverse function of $Q(\cdot)$. The solution of this equation cannot be performed analytically as no closed-form expression exists for $Q(\cdot)$ (much less its inverse function). The criterion value must be found from tables or numerical routines.
Because Gaussian problems arise frequently, the
accompanying table provides
numeric values for this quantity at the decade points.
| $x$ | $Q^{-1}(x)$ |
| --- | --- |
| $10^{-1}$ | 1.281 |
| $10^{-2}$ | 2.326 |
| $10^{-3}$ | 3.090 |
| $10^{-4}$ | 3.719 |
| $10^{-5}$ | 4.265 |
| $10^{-6}$ | 4.754 |
The table displays interesting values for $Q^{-1}(\cdot)$ that can be used to determine thresholds in the Neyman-Pearson variant of the likelihood ratio test. Note how little the inverse function changes for decade changes in its argument; $Q(\cdot)$ is indeed very nonlinear.
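Such tables need not be memorized; a short sketch using Python's standard library reproduces these values to within rounding.

```python
from statistics import NormalDist

def Q_inv(x: float) -> float:
    """Inverse of Q: the point a standard Gaussian exceeds with
    probability x. Since Q(y) = 1 - cdf(y), Q_inv(x) = cdf^{-1}(1 - x)."""
    return NormalDist().inv_cdf(1.0 - x)

# Q^{-1} at the decade points, matching the accompanying table:
for k in range(1, 7):
    print(f"1e-{k}: {Q_inv(10.0 ** -k):.4f}")
```

The slow growth of the printed values for each factor-of-ten drop in the argument is the nonlinearity noted above.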
The detection probability of the Neyman-Pearson decision rule is given by
$$P_D = Q\!\left(Q^{-1}(\alpha') - \frac{\sqrt{L}\,m}{\sigma}\right).$$
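This formula evaluates directly in code. The sketch below (with illustrative parameter values) also exhibits two properties worth checking: at $m = 0$ the detector does no better than chance, $P_D = \alpha'$, and $P_D$ grows toward one as $L$ or $m/\sigma$ increases at fixed $\alpha'$.

```python
import math
from statistics import NormalDist

def Q(x: float) -> float:
    """Standard Gaussian survival function."""
    return 1.0 - NormalDist().cdf(x)

def Q_inv(x: float) -> float:
    """Inverse of Q."""
    return NormalDist().inv_cdf(1.0 - x)

def detection_prob(alpha: float, L: int, m: float, sigma: float) -> float:
    """P_D = Q(Q^{-1}(alpha') - sqrt(L) m / sigma) for the
    equal-means Gaussian Neyman-Pearson detector."""
    return Q(Q_inv(alpha) - math.sqrt(L) * m / sigma)

# More observations raise P_D at a fixed false-alarm criterion:
for L in (1, 10, 100):
    print(L, detection_prob(0.01, L, m=0.5, sigma=1.0))
```

The quantity $\sqrt{L}\,m/\sigma$ plays the role of a signal-to-noise ratio: the detection probability depends on the problem only through it and $\alpha'$.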