From a grander viewpoint, these expressions represent an achievable lower bound on performance (as assessed by the probability of error). Furthermore, this probability will be non-zero if the conditional densities overlap over some range of values of the observation $r$, such as occurred in the previous example. Within regions of overlap, the observed values are ambiguous: either model is consistent with the observations. Our "optimum" decision rule operates in such regions by selecting the model that is most likely (has the highest probability) to have generated the measured data.
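To make the overlap argument concrete, here is a minimal sketch of the minimum-probability-of-error rule, assuming (purely for illustration, not taken from the text's example) two Gaussian models with different means, a common variance, and equal a priori probabilities. Under these assumptions the optimum rule reduces to a midpoint threshold on $r$, and the error probability stays non-zero because the two densities overlap.

```python
# Minimum-probability-of-error decision for two overlapping Gaussian models.
# Illustrative sketch: the means, variance, and equal priors are assumptions,
# not values from the text's example.
import numpy as np
from scipy.stats import norm

m0, m1, sigma = 0.0, 1.0, 1.0      # model means and common standard deviation
p0 = p1 = 0.5                      # equal a priori probabilities (assumed)

# With equal priors and equal variances, the optimum rule compares r against
# the midpoint between the two means.
threshold = 0.5 * (m0 + m1)

def decide(r):
    """Return 1 if model M1 is declared, 0 if model M0 is declared."""
    return int(r > threshold)

# Probability of error = P(say M1 | M0) * p0 + P(say M0 | M1) * p1.
pe = p0 * norm.sf(threshold, loc=m0, scale=sigma) \
   + p1 * norm.cdf(threshold, loc=m1, scale=sigma)
print(f"minimum probability of error = {pe:.4f}")  # non-zero: the densities overlap
```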
Situations occur frequently where assigning or measuring the a priori probabilities $\Pr[\mathcal{M}_i]$ is unreasonable.
Using nomenclature from radar, where model $\mathcal{M}_1$ represents the presence of a target and $\mathcal{M}_0$ its absence, the various types of correct and incorrect decisions have the following names: a detection occurs when we say the target is there and it is ($P_D = \Pr[\text{say } \mathcal{M}_1 \mid \mathcal{M}_1]$); a false alarm occurs when we say it is there and it is not ($P_F = \Pr[\text{say } \mathcal{M}_1 \mid \mathcal{M}_0]$); a miss occurs when we say it is not there and it is ($P_M = \Pr[\text{say } \mathcal{M}_0 \mid \mathcal{M}_1] = 1 - P_D$).
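These probabilities can be estimated empirically. The sketch below continues the hypothetical Gaussian setup from the previous snippet (target adds a mean of 1 to unit-variance noise, a fixed threshold of 0.5); the signal model and threshold are assumptions for illustration only.

```python
# Monte Carlo estimate of the detection, false-alarm, and miss probabilities
# for a simple threshold rule. The Gaussian radar-return model is assumed.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
threshold = 0.5

r_absent  = rng.normal(loc=0.0, scale=1.0, size=n_trials)   # M0: noise only
r_present = rng.normal(loc=1.0, scale=1.0, size=n_trials)   # M1: target + noise

P_F = np.mean(r_absent  > threshold)   # say "target" when it is absent
P_D = np.mean(r_present > threshold)   # say "target" when it is present
P_M = 1.0 - P_D                        # say "no target" when it is present

print(f"P_F = {P_F:.3f}, P_D = {P_D:.3f}, P_M = {P_M:.3f}")
```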
These two probabilities are related to each other in an interesting way. Expressing these quantities in terms of the decision region $Z_1$ (the set of observation values for which we say $\mathcal{M}_1$) and the likelihood functions, we have
$$P_F = \int_{Z_1} p_{r\mid\mathcal{M}_0}(r\mid\mathcal{M}_0)\,dr \qquad\qquad P_D = \int_{Z_1} p_{r\mid\mathcal{M}_1}(r\mid\mathcal{M}_1)\,dr$$
As the region $Z_1$ shrinks, both of these probabilities tend toward zero; as $Z_1$ expands to engulf the entire range of observation values, they both tend toward unity. This rather direct relationship between $P_D$ and $P_F$ does not mean that they equal each other; in most cases, as $Z_1$ expands, $P_D$ increases more rapidly than $P_F$ (we had better be right more often than we are wrong!). However, the "ultimate" situation where a rule is always right and never wrong ($P_D = 1$, $P_F = 0$) cannot occur when the conditional distributions overlap. Thus, to increase the detection probability we must also allow the false-alarm probability to increase. This behavior represents the fundamental tradeoff in detection theory.
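The tradeoff is easy to see by sweeping the threshold (equivalently, shrinking or expanding $Z_1 = \{r > \text{threshold}\}$) and tabulating the resulting $(P_F, P_D)$ pairs. The sketch below again assumes the illustrative Gaussian models; neither the means nor the threshold grid come from the text.

```python
# Sweep the decision threshold for the assumed Gaussian models and tabulate
# the resulting (P_F, P_D) pairs -- the operating characteristic of the rule.
import numpy as np
from scipy.stats import norm

m0, m1, sigma = 0.0, 1.0, 1.0   # illustrative model parameters (assumed)

for threshold in np.linspace(-2.0, 3.0, 11):
    # Raising the threshold shrinks Z1 and drives both probabilities toward 0;
    # lowering it expands Z1 and drives both toward 1. P_D exceeds P_F at every
    # threshold because the M1 density sits to the right of the M0 density.
    P_F = norm.sf(threshold, loc=m0, scale=sigma)
    P_D = norm.sf(threshold, loc=m1, scale=sigma)
    print(f"threshold = {threshold:+.1f}:  P_F = {P_F:.3f},  P_D = {P_D:.3f}")
```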
One can attempt to impose a performance criterion that depends only on these probabilities, with the consequent decision rule not depending on the a priori probabilities.
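As one concrete instance of such a criterion (sketched here under the same illustrative Gaussian assumptions, with a ceiling $\alpha$ that is an assumption, not a value from the text), we can constrain the false-alarm probability to be no larger than $\alpha$ and accept the largest detection probability the threshold rule then allows; no a priori probabilities enter the choice.

```python
# Sketch of a criterion that uses only P_F and P_D: cap the false-alarm
# probability at alpha and take the detection probability that results.
# The Gaussian model and the value of alpha are illustrative assumptions.
import numpy as np
from scipy.stats import norm

m0, m1, sigma = 0.0, 1.0, 1.0
alpha = 0.05                      # allowed false-alarm probability (assumed)

# For the rule "say M1 when r > threshold", P_F falls as the threshold rises,
# so the smallest threshold satisfying P_F <= alpha gives the largest P_D.
threshold = norm.isf(alpha, loc=m0, scale=sigma)
P_D = norm.sf(threshold, loc=m1, scale=sigma)

print(f"threshold = {threshold:.3f}, P_F = {alpha:.3f}, P_D = {P_D:.3f}")
```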