
Neyman-Pearson criterion

Situations occur frequently where assigning or measuring the a priori probabilities $P_i$ is unreasonable. For example, just what is the a priori probability of a supernova occurring in any particular region of the sky? We clearly need a model evaluation procedure that can function without a priori probabilities. This kind of test results when the so-called Neyman-Pearson criterion is used to derive the decision rule. The ideas behind, and the decision rules derived with, the Neyman-Pearson criterion (Neyman and Pearson) will serve us well in the sequel; their result is important!

Using nomenclature from radar, where model $\mathcal{M}_1$ represents the presence of a target and $\mathcal{M}_0$ its absence, the various types of correct and incorrect decisions have the following names (Woodward, pp. 127-129).

In hypothesis testing, a false alarm is known as a type I error and a miss as a type II error.
  • Detection

    we say it's there when it is; $P_D = \Pr\left[\text{say }\mathcal{M}_1 \mid \mathcal{M}_1\text{ true}\right]$
  • False-alarm

    we say it's there when it's not; $P_F = \Pr\left[\text{say }\mathcal{M}_1 \mid \mathcal{M}_0\text{ true}\right]$
  • Miss

    we say it's not there when it is; $P_M = \Pr\left[\text{say }\mathcal{M}_0 \mid \mathcal{M}_1\text{ true}\right]$
The remaining probability $\Pr\left[\text{say }\mathcal{M}_0 \mid \mathcal{M}_0\text{ true}\right]$ has historically been left nameless and equals $1 - P_F$. We should also note that the detection and miss probabilities are related by $P_M = 1 - P_D$. As these are conditional probabilities, they do not depend on the a priori probabilities, and the two probabilities $P_F$ and $P_D$ characterize the errors when any decision rule is used.
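
As a concrete illustration (not part of the original text), the sketch below estimates these three conditional probabilities by Monte Carlo for a hypothetical scalar problem: under $\mathcal{M}_0$ the observation is zero-mean, unit-variance Gaussian noise, under $\mathcal{M}_1$ it is Gaussian with an assumed mean m = 1, and the decision rule "say $\mathcal{M}_1$ when r exceeds a threshold" is chosen purely for illustration.

    # Hypothetical example: Monte Carlo estimates of P_D, P_F, and P_M for a
    # simple threshold rule.  Model 0: r ~ N(0, 1); model 1: r ~ N(m, 1).
    import numpy as np

    rng = np.random.default_rng(0)
    m, threshold, n_trials = 1.0, 0.5, 100_000

    r0 = rng.normal(0.0, 1.0, n_trials)   # observations when model 0 is true
    r1 = rng.normal(m, 1.0, n_trials)     # observations when model 1 is true

    P_F = np.mean(r0 > threshold)         # Pr[say 1 | 0 true]
    P_D = np.mean(r1 > threshold)         # Pr[say 1 | 1 true]
    P_M = np.mean(r1 <= threshold)        # Pr[say 0 | 1 true]

    print(f"P_F = {P_F:.3f}  P_D = {P_D:.3f}  P_M = {P_M:.3f}")
    print(f"P_D + P_M = {P_D + P_M:.3f}") # detection and miss are complementary

Note that these estimates depend only on which model actually generated the data, not on how often each model occurs, mirroring the point about conditional probabilities above.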

These two probabilities are related to each other in an interesting way. Expressing these quantities in terms of the decision regions and the likelihood functions, we have

$$P_F = \int_{\mathcal{Z}_1} p_{r\mid\mathcal{M}_0}(r\mid\mathcal{M}_0)\,dr \qquad\qquad P_D = \int_{\mathcal{Z}_1} p_{r\mid\mathcal{M}_1}(r\mid\mathcal{M}_1)\,dr$$

where $\mathcal{Z}_1$ denotes the region of observation values for which we say $\mathcal{M}_1$. As the region $\mathcal{Z}_1$ shrinks, both of these probabilities tend toward zero; as $\mathcal{Z}_1$ expands to engulf the entire range of observation values, they both tend toward unity. This rather direct relationship between $P_D$ and $P_F$ does not mean that they equal each other; in most cases, as $\mathcal{Z}_1$ expands, $P_D$ increases more rapidly than $P_F$ (we had better be right more often than we are wrong!). However, the "ultimate" situation where a rule is always right and never wrong ($P_D = 1$, $P_F = 0$) cannot occur when the conditional distributions overlap. Thus, to increase the detection probability we must also allow the false-alarm probability to increase. This behavior represents the fundamental tradeoff in hypothesis testing and detection theory.
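
This tradeoff can be seen numerically. Continuing the hypothetical Gaussian example above (an assumption, not a case treated in the original text), the integrals defining $P_F$ and $P_D$ reduce to Gaussian tail probabilities; sweeping the threshold, which is equivalent to shrinking or expanding $\mathcal{Z}_1 = \{r > \text{threshold}\}$, moves both probabilities together between zero and one, with $P_D$ always exceeding $P_F$.

    # Hypothetical Gaussian example: P_F and P_D as the decision region
    # Z_1 = {r > threshold} expands (i.e., as the threshold decreases).
    from scipy.stats import norm

    m = 1.0                                  # assumed mean under model 1
    for threshold in (3.0, 2.0, 1.0, 0.0, -1.0, -3.0):
        P_F = norm.sf(threshold)             # integral of p(r | M0) over Z_1
        P_D = norm.sf(threshold - m)         # integral of p(r | M1) over Z_1
        print(f"threshold = {threshold:+.1f}   P_F = {P_F:.4f}   P_D = {P_D:.4f}")

    # As the threshold falls, Z_1 engulfs the real line and both probabilities
    # tend to one; as it rises, both tend to zero.  Because the two densities
    # overlap, no threshold gives P_D = 1 with P_F = 0.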

One can attempt to impose a performance criterion that depends only on these probabilities, with the consequent decision rule not depending on the a priori probabilities. The Neyman-Pearson criterion assumes that the false-alarm probability is constrained to be less than or equal to a specified value $\alpha'$ while we attempt to maximize the detection probability $P_D$:

$$\max_{\mathcal{Z}_1} P_D \quad \text{subject to} \quad P_F \le \alpha'$$

A subtlety of the succeeding solution is that the underlying probability distribution functions may not be continuous, with the result that $P_F$ can never equal the constraining value $\alpha'$. Furthermore, an (unlikely) possibility is that the optimum value for the false-alarm probability is somewhat less than the criterion value. Assume, therefore, that we rephrase the optimization problem by requiring that the false-alarm probability equal a value $\alpha$ that is less than or equal to $\alpha'$.
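
A minimal sketch of this constrained design for the same hypothetical Gaussian problem (the Gaussian models, the mean m, and the false-alarm ceiling of 0.01 are assumptions made for illustration): because the distributions here are continuous, the threshold can be set so that $P_F$ equals the constraint exactly, and the detection probability follows.

    # Neyman-Pearson design for the hypothetical Gaussian example:
    # constrain P_F <= alpha' and maximize P_D by choosing the threshold
    # so that P_F equals alpha' exactly (possible here because the
    # distributions are continuous).
    from scipy.stats import norm

    alpha_prime = 0.01                 # specified false-alarm constraint
    m = 1.0                            # assumed mean under model 1

    threshold = norm.isf(alpha_prime)  # P_F = Pr[r > threshold | M0] = alpha'
    P_D = norm.sf(threshold - m)       # resulting detection probability

    print(f"threshold = {threshold:.3f}  P_F = {alpha_prime:.3f}  P_D = {P_D:.3f}")

Tightening the constraint (a smaller $\alpha'$) pushes the threshold up and lowers $P_D$, which is exactly the tradeoff described above.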

Source: OpenStax, Statistical signal processing. OpenStax CNX, Dec 05, 2011. Download for free at http://cnx.org/content/col11382/1.1