
In many circumstances, the observations to be used in evaluating models arrive sequentially rather than all at once. For example, passive sonar systems may well "listen" over a period of time to an array's output while the array is steered in a particular direction. The decision rules we have derived implicitly assume that the entire block of data (the array output observed over a long period of time) is available. You might wonder whether a hypothesis test could be developed that takes the sequential arrival of data into account, making decisions as the data arrive, with the possibility of determining early in the data-collection procedure the validity of one model, while maintaining the same performance specifications. Answering this question leads to the formulation of sequential hypothesis testing (Poor: 136-156, Wald). Not only do sequential tests exist, they can provide performance superior to that of block tests in certain cases.

To make decisions as the data become available, we must generalize the decision-making process. Assume as before that the observed data comprise an observation vector $\mathbf{r}$ of length $L$. The decision rule (in the two-model case) now consists of determining which model is valid or that more data are required. Thus, the range of values of $\mathbf{r}$ is partitioned into three regions $\Re_0$, $\Re_1$, and $\Re_?$. Making the latter decision implies that the data gathered to that point are insufficient to meet the performance requirements. More data must be obtained to achieve the required performance, and the test re-applied once these additional data become available. Thus, a variable number of observations is required to make a decision. An issue in this kind of procedure is the number of observations required to satisfy the performance criteria: for a common set of performance specifications, does this procedure result in a decision rule requiring, on average, fewer observations than does a fixed-length block test?
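As a concrete picture of this three-region rule, here is a minimal sketch of the control flow. The `observe` and `evaluate` callables are hypothetical stand-ins (neither appears in the text): the test is simply re-applied to the growing data record until it leaves the "need more data" region.

```python
from enum import Enum

class Decision(Enum):
    SAY_M0 = 0            # r fell in region R0
    SAY_M1 = 1            # r fell in region R1
    NEED_MORE_DATA = 2    # r fell in region R?

def sequential_test(observe, evaluate):
    """Collect observations one at a time until `evaluate` commits to a model.

    `observe()` returns the next observation; `evaluate(data)` maps the data
    gathered so far into one of the three decision regions.  Returns the
    decision and the (variable) number of observations it consumed.
    """
    data = []
    while True:
        data.append(observe())                 # one more observation arrives
        decision = evaluate(data)              # re-apply the test to all data so far
        if decision is not Decision.NEED_MORE_DATA:
            return decision, len(data)
```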

Sequential likelihood ratio test

In a manner similar to the Neyman-Pearson criterion, we specify the false-alarm probability $P_F$; in addition, we need to specify the detection probability $P_D$. These constraints over-specify the model evaluation problem when the number of observations is fixed: fixing $P_F$ determines $P_D$ through the detector's receiver operating characteristic, so enforcing one constraint forces violation of the other. In contrast, both may be specified in the sequential test, as we shall see.

Assuming a likelihood ratio test, two thresholds are required to define the three decision regions:

$$\Lambda_L(\mathbf{r}) < \eta_0 \;\Rightarrow\; \text{say } \mathcal{M}_0$$
$$\eta_0 \le \Lambda_L(\mathbf{r}) < \eta_1 \;\Rightarrow\; \text{say ``need more data''}$$
$$\eta_1 \le \Lambda_L(\mathbf{r}) \;\Rightarrow\; \text{say } \mathcal{M}_1$$

where $\Lambda_L(\mathbf{r})$ is the usual likelihood ratio, with the dimension $L$ of the vector $\mathbf{r}$ explicitly denoted. The threshold values $\eta_0$ and $\eta_1$ are found from the constraints, which are expressed as

$$P_F = \int_{\Re_1} p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})\, d\mathbf{r} = \alpha \qquad \text{and} \qquad P_D = \int_{\Re_1} p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})\, d\mathbf{r} = \beta$$

Here, $\alpha$ and $\beta$ are design constants that you choose according to the application. Note that the probabilities $P_F$, $P_D$ are associated not with what happens on a given trial, but with what the sequential test yields in terms of performance when a decision is made. Thus, $P_M = 1 - P_D$, although the probability of correctly saying $\mathcal{M}_1$ on a given trial does not equal one minus the probability of incorrectly saying $\mathcal{M}_0$ is true: the "need more data" region must be accounted for on an individual trial, but not when considering the sequential test's performance when it terminates.
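To make the thresholds concrete, the sketch below implements a sequential likelihood ratio test for a simple setting assumed here for illustration and not taken from the text: i.i.d. unit-variance Gaussian observations with mean $0$ under $\mathcal{M}_0$ and mean $m$ under $\mathcal{M}_1$. It uses Wald's classical threshold approximations $\eta_0 \approx (1-P_D)/(1-P_F)$ and $\eta_1 \approx P_D/P_F$, worked in the log domain.

```python
import numpy as np

def sprt_gaussian(samples, m=1.0, p_f=0.01, p_d=0.99):
    """Sequential LRT deciding between M0: r ~ N(0,1) and M1: r ~ N(m,1).

    Returns the decision ('M0' or 'M1') and the number of observations
    consumed, or ('undecided', L) if the data run out first.
    """
    # Wald's threshold approximations, expressed in the log domain.
    log_eta0 = np.log((1.0 - p_d) / (1.0 - p_f))  # below eta0: say M0
    log_eta1 = np.log(p_d / p_f)                  # at or above eta1: say M1

    log_lr = 0.0
    for l, r in enumerate(samples, start=1):
        # Log-likelihood-ratio increment for one N(m,1)-vs-N(0,1) sample:
        # ln[p(r|M1)/p(r|M0)] = m*r - m^2/2.
        log_lr += m * r - 0.5 * m**2
        if log_lr < log_eta0:
            return "M0", l
        if log_lr >= log_eta1:
            return "M1", l
    return "undecided", len(samples)  # still in the "need more data" region

# Example: data actually drawn from M1; the running likelihood ratio
# typically crosses the upper threshold after only a handful of the
# 1000 available samples.
rng = np.random.default_rng(0)
print(sprt_gaussian(rng.normal(1.0, 1.0, size=1000)))
```

This early stopping, while still meeting the specified $P_F$ and $P_D$, is exactly the potential advantage over fixed-length block tests raised above.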

Source: OpenStax, Statistical signal processing. OpenStax CNX, Dec 05, 2011. Download for free at http://cnx.org/content/col11382/1.1