
The criterion used in the previous section - minimize the average cost of an incorrect decision - may seem to be a contrived way of quantifying decisions. Well, often it is. For example, the Bayesian decision rule depends explicitly on the a priori probabilities; a rational method of assigning values to these - either by experiment or through true knowledge of the relative likelihood of each model - may be unreasonable. In this section, we develop alternative decision rules that try to answer such objections. One essential point will emerge from these considerations: the fundamental nature of the decision rule does not change with the choice of optimization criterion. Even criteria remote from error measures can result in the likelihood ratio test (see this problem). Such results do not occur often in signal processing and underline the likelihood ratio test's significance.

Maximum probability of a correct decision

As only one model can describe any given set of data (the models are mutually exclusive), the probability of being correct $P_c$ for distinguishing two models is given by
$$P_c = \Pr[\text{say } \mathcal{M}_0 \text{ when } \mathcal{M}_0 \text{ true}] + \Pr[\text{say } \mathcal{M}_1 \text{ when } \mathcal{M}_1 \text{ true}]$$
We wish to determine the optimum placement of the decision regions. Expressing the probability correct in terms of the likelihood functions $p_{\mathbf{r}|\mathcal{M}_i}(\mathbf{r})$, the a priori probabilities, and the decision regions,
$$P_c = \int_{\Re_0} \pi_0\, p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})\, d\mathbf{r} + \int_{\Re_1} \pi_1\, p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})\, d\mathbf{r}$$
We want to maximize $P_c$ by selecting the decision regions $\Re_0$ and $\Re_1$. The probability correct is maximized by associating each value of $\mathbf{r}$ with the largest term in the expression for $P_c$. Decision region $\Re_0$, for example, is defined by the collection of values of $\mathbf{r}$ for which the first term is largest. As all of the quantities involved are non-negative, the decision rule maximizing the probability of a correct decision is

Given $\mathbf{r}$, choose $\mathcal{M}_i$ for which the product $\pi_i\, p_{\mathbf{r}|\mathcal{M}_i}(\mathbf{r})$ is largest.
Simple manipulations lead to the likelihood ratio test:
$$\frac{p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})}{p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})} \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \frac{\pi_0}{\pi_1}$$
Note that if the Bayes' costs were chosen so that $C_{ii} = 0$ and $C_{ij} = C$ ($i \ne j$), we would have the same threshold as in the previous section.
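As an illustration, here is a minimal sketch of this rule for a hypothetical scalar problem in which the two models are Gaussian with different means and a common variance. The means, the variance, and the a priori probabilities below are assumptions made for the example, not values taken from the text.

from scipy.stats import norm

# Hypothetical scalar Gaussian models: r ~ N(m_i, sigma^2) under model M_i.
m0, m1, sigma = 0.0, 1.0, 1.0
# Assumed a priori probabilities pi_0 and pi_1.
pi0, pi1 = 0.6, 0.4

def decide(r):
    """Choose the model index i maximizing pi_i * p(r | M_i)."""
    likelihood_ratio = norm.pdf(r, m1, sigma) / norm.pdf(r, m0, sigma)
    threshold = pi0 / pi1
    return int(likelihood_ratio > threshold)   # 1 -> say M_1, 0 -> say M_0

print(decide(0.2), decide(1.4))   # prints: 0 1

Because the Gaussian likelihood ratio is monotonic in $r$ in this example, the rule reduces to comparing $r$ against a single scalar threshold.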

To evaluate the quality of the decision rule, we usually compute the probability of error $P_e$ rather than the probability of being correct. This quantity can be expressed in terms of the observations, the likelihood ratio, and the sufficient statistic.

$$
\begin{aligned}
P_e &= \pi_0 \int_{\Re_1} p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})\, d\mathbf{r} + \pi_1 \int_{\Re_0} p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})\, d\mathbf{r} \\
&= \pi_0 \Pr[\Lambda(\mathbf{r}) > \eta \mid \mathcal{M}_0] + \pi_1 \Pr[\Lambda(\mathbf{r}) < \eta \mid \mathcal{M}_1] \\
&= \pi_0 \Pr[\Upsilon(\mathbf{r}) > \gamma \mid \mathcal{M}_0] + \pi_1 \Pr[\Upsilon(\mathbf{r}) < \gamma \mid \mathcal{M}_1]
\end{aligned}
$$
When the likelihood ratio is non-monotonic, the first expression is the most difficult to evaluate; when it is monotonic, the middle expression proves the most difficult. Furthermore, these expressions point out that the likelihood ratio and the sufficient statistic can be considered functions of the observations $\mathbf{r}$; hence, they are random variables and have probability densities for each model. Another aspect of the resulting probability of error is that no other decision rule can yield a lower probability of error. This statement is obvious, as we minimized the probability of error in deriving the likelihood ratio test. The point is that these expressions represent a lower bound on performance (as assessed by the probability of error). This probability will be non-zero if the conditional densities overlap over some range of values of $\mathbf{r}$, such as occurred in the previous example. In this region of overlap, the observed values are ambiguous: either model is consistent with the observations. Our "optimum" decision rule operates in such regions by selecting the model most likely (having the highest probability) to have generated any particular value.
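Continuing the hypothetical Gaussian example above, the following sketch evaluates this minimum probability of error two ways: in closed form from the Gaussian distribution function, and by Monte Carlo simulation of the likelihood ratio test. The parameter values remain illustrative assumptions.

import numpy as np
from scipy.stats import norm

m0, m1, sigma = 0.0, 1.0, 1.0   # hypothetical model means and common std. dev.
pi0, pi1 = 0.6, 0.4             # assumed a priori probabilities

# Decision boundary where pi_1 p(r|M_1) = pi_0 p(r|M_0) (equal-variance Gaussian case).
r_star = (m0 + m1) / 2 + sigma**2 * np.log(pi0 / pi1) / (m1 - m0)

# P_e = pi_0 Pr[say M_1 | M_0] + pi_1 Pr[say M_0 | M_1], evaluated in closed form.
pe_analytic = pi0 * norm.sf(r_star, m0, sigma) + pi1 * norm.cdf(r_star, m1, sigma)

# Monte Carlo check: draw data from the prior-weighted models and apply the test.
rng = np.random.default_rng(0)
n = 200_000
truth = rng.random(n) < pi1                     # True -> data generated by M_1
r = rng.normal(np.where(truth, m1, m0), sigma)
decisions = r > r_star                          # the likelihood ratio test
pe_monte_carlo = np.mean(decisions != truth)

print(pe_analytic, pe_monte_carlo)

The two estimates agree to within simulation error, and moving the decision boundary away from the value dictated by the threshold $\pi_0/\pi_1$ can only increase the error, consistent with the lower-bound interpretation of $P_e$.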





Source:  OpenStax, Statistical signal processing. OpenStax CNX. Dec 05, 2011 Download for free at http://cnx.org/content/col11382/1.1
