
This optimization problem can be solved using Lagrange multipliers (see Constrained Optimization); we seek to find the decision rule that maximizes $F = P_D + \lambda (P_F - \alpha')$, where $\lambda$ is the Lagrange multiplier. This optimization technique amounts to finding the decision rule that maximizes $F$, then finding the value of the multiplier that allows the criterion to be satisfied. As is usual in the derivation of optimum decision rules, we maximize these quantities with respect to the decision regions. Expressing $P_D$ and $P_F$ in terms of them, we have

$$F = \int_{\mathcal{R}_1} p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})\, d\mathbf{r} + \lambda \left( \int_{\mathcal{R}_1} p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})\, d\mathbf{r} - \alpha' \right) = -\lambda \alpha' + \int_{\mathcal{R}_1} \left[ p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}) + \lambda\, p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r}) \right] d\mathbf{r}$$
To maximize this quantity with respect to $\mathcal{R}_1$, we need only integrate over those regions of $\mathbf{r}$ where the integrand is positive. The region $\mathcal{R}_1$ thus corresponds to those values of $\mathbf{r}$ where $p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}) > -\lambda\, p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})$, and the resulting decision rule is

$$\frac{p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})}{p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})} \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} -\lambda$$

The ubiquitous likelihood ratio test again appears; it is indeed the fundamental quantity in hypothesis testing. Using the logarithm of the likelihood ratio or the sufficient statistic, this result can be expressed as either

$$\ln \Lambda(\mathbf{r}) \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \ln(-\lambda) \quad\text{or}\quad \Upsilon(\mathbf{r}) \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \gamma$$
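For readers who prefer to see the rule in code, here is a minimal sketch of a generic likelihood ratio test; the callables `p1` and `p0` and the positive threshold are hypothetical stand-ins for the model densities and for $-\lambda$.

```python
import numpy as np

def likelihood_ratio_test(r, p1, p0, threshold):
    """Decide between models M1 and M0 from the observation vector r.

    p1, p0    : callables returning the likelihoods p(r | M1) and p(r | M0)
    threshold : the (positive) test threshold, written -lambda above
    Returns 1 if M1 is declared, 0 otherwise.
    """
    # Compare in the log domain for numerical safety; the test is equivalent.
    log_ratio = np.log(p1(r)) - np.log(p0(r))
    return 1 if log_ratio > np.log(threshold) else 0
```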

We have not as yet found a value for the threshold. The false-alarm probability can be expressed in terms of the Neyman-Pearson threshold in two (useful) ways.

$$P_F = \int_{-\lambda}^{\infty} p_{\Lambda|\mathcal{M}_0}(\Lambda)\, d\Lambda = \int_{\gamma}^{\infty} p_{\Upsilon|\mathcal{M}_0}(\Upsilon)\, d\Upsilon$$
One of these implicit equations must be solved for the threshold by setting $P_F$ equal to $\alpha'$. The selection of which to use is usually based on pragmatic considerations: the one easiest to compute. From the previous discussion of the relationship between the detection and false-alarm probabilities, we find that to maximize $P_D$ we must allow $P_F$ to be as large as possible while remaining less than $\alpha'$. Thus, we want to find the smallest value of $-\lambda$ (note the minus sign) consistent with the constraint. Computation of the threshold is problem-dependent, but a solution always exists.
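As an illustration of solving such an implicit equation numerically, the sketch below finds the sufficient-statistic threshold $\gamma$ for a given $\alpha'$; the zero-mean, unit-variance Gaussian statistic used here is purely an assumption for demonstration.

```python
import numpy as np
from scipy import optimize, stats

def np_threshold(alpha, sf):
    """Solve P_F = alpha for the sufficient-statistic threshold gamma.

    sf : survival function of the statistic under M0,
         i.e. sf(gamma) = Pr[statistic > gamma | M0].
    """
    # sf is monotonically decreasing, so the root is unique.
    return optimize.brentq(lambda g: sf(g) - alpha, -50.0, 50.0)

# Illustration: statistic distributed N(0, 1) under M0 (an assumed example).
gamma = np_threshold(1e-2, stats.norm(0.0, 1.0).sf)
print(gamma)   # about 2.33
```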

An important application of the likelihood ratio test occurs when $\mathbf{r}$ is a Gaussian random vector for each model. Suppose the models correspond to Gaussian random vectors having different mean values but sharing the same scaled-identity covariance $\sigma^2 \mathbf{I}$.

  • $\mathcal{M}_0$: $\mathbf{r} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$
  • $\mathcal{M}_1$: $\mathbf{r} \sim \mathcal{N}(\mathbf{m}, \sigma^2 \mathbf{I})$
Thus, $\mathbf{r}$ is of dimension $L$ and has statistically independent, equal-variance components. The vector of means $\mathbf{m} = \mathrm{col}[m_0, \ldots, m_{L-1}]$ distinguishes the two models. The likelihood functions associated with this problem are

$$p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r}) = \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{1}{2} \left( \frac{r_l}{\sigma} \right)^2 \right)$$

$$p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}) = \prod_{l=0}^{L-1} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{1}{2} \left( \frac{r_l - m_l}{\sigma} \right)^2 \right)$$

The likelihood ratio $\Lambda(\mathbf{r})$ becomes

$$\Lambda(\mathbf{r}) = \frac{\prod_{l=0}^{L-1} \exp\!\left( -\frac{1}{2} \left( \frac{r_l - m_l}{\sigma} \right)^2 \right)}{\prod_{l=0}^{L-1} \exp\!\left( -\frac{1}{2} \left( \frac{r_l}{\sigma} \right)^2 \right)}$$

This expression for the likelihood ratio is complicated. In the Gaussian case (and many others), we use the logarithm to reduce the complexity of the likelihood ratio and form a sufficient statistic.
$$\ln \Lambda(\mathbf{r}) = \sum_{l=0}^{L-1} \left( -\frac{1}{2} \frac{(r_l - m_l)^2}{\sigma^2} + \frac{1}{2} \frac{r_l^2}{\sigma^2} \right) = \frac{1}{\sigma^2} \sum_{l=0}^{L-1} m_l r_l - \frac{1}{2\sigma^2} \sum_{l=0}^{L-1} m_l^2$$
The likelihood ratio test then has the much simpler, but equivalent form

$$\sum_{l=0}^{L-1} m_l r_l \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \sigma^2 \ln \eta + \frac{1}{2} \sum_{l=0}^{L-1} m_l^2$$

where $\eta$ denotes the likelihood-ratio threshold dictated by the criterion in use. To focus on the model evaluation aspects of this problem, let's assume the means are equal to a positive constant: $m_l = m$ ($m > 0$).
Why did the authors assume that the mean was positive? What would happen if it were negative?
$$\sum_{l=0}^{L-1} r_l \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \frac{\sigma^2}{m} \ln \eta + \frac{Lm}{2}$$

Note that all that need be known about the observations $\{r_l\}$ is their sum. This quantity is the sufficient statistic for the Gaussian problem: $\Upsilon(\mathbf{r}) = \sum_l r_l$ and $\gamma = \frac{\sigma^2}{m} \ln \eta + \frac{Lm}{2}$.
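A short sketch of this detector in code; the parameter values ($L$, $m$, $\sigma$, and the likelihood-ratio threshold $\eta$) are arbitrary assumptions chosen only to make the example runnable.

```python
import numpy as np

def gaussian_detector(r, m, sigma, eta):
    """Equal-means Gaussian detector: sum the observations and threshold.

    gamma = (sigma^2 / m) * ln(eta) + L*m/2, as derived above.
    Returns 1 if M1 (mean m) is declared, 0 if M0 (zero mean).
    """
    L = len(r)
    statistic = np.sum(r)                              # Upsilon(r)
    gamma = (sigma**2 / m) * np.log(eta) + L * m / 2.0
    return 1 if statistic > gamma else 0

# Example with assumed parameters (L = 10, m = 1, sigma = 1, eta = 1).
rng = np.random.default_rng(0)
r = rng.normal(loc=1.0, scale=1.0, size=10)   # data drawn under M1
print(gaussian_detector(r, m=1.0, sigma=1.0, eta=1.0))
```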

When trying to compute the probability of error or the threshold in the Neyman-Pearson criterion, we must find the conditional probability density of one of the decision statistics: the likelihood ratio, the log-likelihood, or the sufficient statistic. The log-likelihood and the sufficient statistic are quite similar in this problem, but clearly we should use the latter. One practical property of the sufficient statistic is that it usually simplifies computations. For this Gaussian example, the sufficient statistic is a Gaussian random variable under each model.

  • $\mathcal{M}_0$: $\Upsilon \sim \mathcal{N}(0, L\sigma^2)$
  • $\mathcal{M}_1$: $\Upsilon \sim \mathcal{N}(Lm, L\sigma^2)$
To find the probability of error from these densities, we must evaluate the area under a Gaussian probability density function. These integrals are succinctly expressed in terms of $Q(x)$, which denotes the probability that a unit-variance, zero-mean Gaussian random variable exceeds $x$ (see Probability and Stochastic Processes). As $1 - Q(x) = Q(-x)$, the probability of error can be written as

$$P_e = \pi_1 Q\!\left( \frac{Lm - \gamma}{\sqrt{L\sigma^2}} \right) + \pi_0 Q\!\left( \frac{\gamma}{\sqrt{L\sigma^2}} \right)$$

An interesting special case occurs when $\pi_0 = \tfrac{1}{2} = \pi_1$. In this case, $\gamma = \frac{Lm}{2}$ and the probability of error becomes

$$P_e = Q\!\left( \frac{\sqrt{L}\, m}{2\sigma} \right)$$

As $Q(\cdot)$ is a monotonically decreasing function, the probability of error decreases with increasing values of the ratio $\frac{\sqrt{L}\, m}{2\sigma}$. However, as shown in this figure, $Q(\cdot)$ decreases in a nonlinear fashion. Thus, increasing $m$ by a factor of two may decrease the probability of error by a larger or a smaller factor; the amount of change depends on the initial value of the ratio.
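The special case is easy to check numerically: the sketch below evaluates $P_e = Q(\sqrt{L}\,m/2\sigma)$ with scipy's Gaussian survival function and compares it with a Monte Carlo estimate; the particular values of $L$, $m$, and $\sigma$ are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

L, m, sigma = 10, 1.0, 1.0                          # assumed parameters
pe_theory = norm.sf(np.sqrt(L) * m / (2 * sigma))   # Q(sqrt(L)*m / (2*sigma))

# Monte Carlo check: equally likely models, threshold gamma = L*m/2.
rng = np.random.default_rng(1)
trials = 200_000
models = rng.integers(0, 2, size=trials)            # true model for each trial
means = models[:, None] * m                         # 0 under M0, m under M1
sums = rng.normal(means, sigma, size=(trials, L)).sum(axis=1)
pe_sim = np.mean((sums > L * m / 2).astype(int) != models)

print(pe_theory, pe_sim)                            # both near 0.057
```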

To find the threshold for the Neyman-Pearson test from the expressions given above, we need the area under a Gaussian density.

$$P_F = Q\!\left( \frac{\gamma}{\sqrt{L\sigma^2}} \right)$$
As $Q(\cdot)$ is a monotonic and continuous function, we can now set $P_F$ equal to the criterion value $\alpha'$ with the result

$$\gamma = \sqrt{L\sigma^2}\, Q^{-1}(\alpha')$$

where $Q^{-1}(\cdot)$ denotes the inverse function of $Q(\cdot)$. The solution of this equation cannot be performed analytically as no closed-form expression exists for $Q(\cdot)$ (much less its inverse); the required value of $Q^{-1}(\alpha')$ must be found from tables or numerical routines. Because Gaussian problems arise frequently, the accompanying table provides numeric values for this quantity at the decade points.
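In code, $Q^{-1}(\cdot)$ is the inverse survival function of the standard Gaussian, so the threshold can be computed directly; the problem size and criterion value below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

L, sigma, alpha = 10, 1.0, 1e-2                  # assumed problem size and criterion
gamma = np.sqrt(L * sigma**2) * norm.isf(alpha)  # gamma = sqrt(L*sigma^2) * Qinv(alpha')

# Sanity check: plugging gamma back in recovers the false-alarm criterion.
print(gamma, norm.sf(gamma / np.sqrt(L * sigma**2)))   # P_F = 0.01
```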
The table displays values of $Q^{-1}(\cdot)$ that can be used to determine thresholds in the Neyman-Pearson variant of the likelihood ratio test. Note how little the inverse function changes for decade changes in its argument; $Q(\cdot)$ is indeed very nonlinear.
x        Q⁻¹(x)
10⁻¹     1.281
10⁻²     2.326
10⁻³     3.090
10⁻⁴     3.719
10⁻⁵     4.265
10⁻⁶     4.754
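The table entries can be reproduced (to within rounding) with scipy's inverse survival function for the standard Gaussian:

```python
from scipy.stats import norm

# Q^{-1}(x) at the decade points, via the standard Gaussian inverse survival function.
for k in range(1, 7):
    print(f"1e-{k}: {norm.isf(10.0**-k):.3f}")
```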
The detection probability is given by

$$P_D = Q\!\left( Q^{-1}(\alpha') - \frac{\sqrt{L}\, m}{\sigma} \right)$$
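Continuing the numerical sketch with the same assumed parameters, the detection probability follows in one line:

```python
import numpy as np
from scipy.stats import norm

L, m, sigma, alpha = 10, 1.0, 1.0, 1e-2          # assumed parameters
p_d = norm.sf(norm.isf(alpha) - np.sqrt(L) * m / sigma)   # Q(Qinv(alpha') - sqrt(L)*m/sigma)
print(p_d)                                       # roughly 0.8 for these values
```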

Source:  OpenStax, Statistical signal processing. OpenStax CNX. Dec 05, 2011 Download for free at http://cnx.org/content/col11382/1.1