
Usually, the likelihood is manipulated to derive a sufficient statistic $\Upsilon_L(\mathbf{r})$. The resulting sequential decision rule is
$$
\begin{aligned}
\Upsilon_L(\mathbf{r}) &\le \gamma_0(L) && \text{say } \mathcal{M}_0 \\
\gamma_0(L) < \Upsilon_L(\mathbf{r}) &< \gamma_1(L) && \text{say ``need more data''} \\
\Upsilon_L(\mathbf{r}) &\ge \gamma_1(L) && \text{say } \mathcal{M}_1
\end{aligned}
$$
Note that the thresholds $\gamma_0(L)$ and $\gamma_1(L)$, derived from the thresholds $\eta_0$ and $\eta_1$, usually depend on the number of observations used in the decision rule.
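As a concrete illustration of this three-way rule, here is a minimal Python sketch; the names sufficient_statistic, gamma0, and gamma1 are hypothetical stand-ins for $\Upsilon_L(\mathbf{r})$ and the problem-dependent thresholds $\gamma_0(L)$ and $\gamma_1(L)$.

```python
# Minimal sketch of the three-way sequential decision rule. The helper names are
# hypothetical: gamma0(L) and gamma1(L) compute the data-length-dependent thresholds.
def sequential_decision(sufficient_statistic, gamma0, gamma1, L):
    """Return 'M0', 'M1', or 'need more data' for the statistic after L observations."""
    if sufficient_statistic <= gamma0(L):
        return "M0"
    if sufficient_statistic >= gamma1(L):
        return "M1"
    return "need more data"
```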

Let $\mathbf{r}$ be a Gaussian random vector, as in our previous examples, with statistically independent components:
$$
\mathcal{M}_0:\ \mathbf{r} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})
\qquad
\mathcal{M}_1:\ \mathbf{r} \sim \mathcal{N}(\mathbf{m}, \sigma^2 \mathbf{I})
$$
The mean vector $\mathbf{m}$ is assumed for simplicity to consist of equal positive values: $\mathbf{m} = \begin{bmatrix} m & \cdots & m \end{bmatrix}^T$, $m > 0$. Using the previous derivations, our sequential test becomes
$$
\begin{aligned}
\sum_{l=0}^{L-1} r_l &\le \frac{\sigma^2}{m}\ln\eta_0 + \frac{Lm}{2} && \text{say } \mathcal{M}_0 \\
\frac{\sigma^2}{m}\ln\eta_0 + \frac{Lm}{2} < &\sum_{l=0}^{L-1} r_l < \frac{\sigma^2}{m}\ln\eta_1 + \frac{Lm}{2} && \text{say ``need more data''} \\
\sum_{l=0}^{L-1} r_l &\ge \frac{\sigma^2}{m}\ln\eta_1 + \frac{Lm}{2} && \text{say } \mathcal{M}_1
\end{aligned}
$$
Starting with $L = 1$, we gather the data and compute the sum. The sufficient statistic will lie in the middle range between the two thresholds until one of them is exceeded, as shown in the figure below.

Example of the sequential likelihood ratio test. The sufficient statistic wanders between the two thresholds in a sequential decision rule until one of them is crossed by the statistic. The number of observations used to obtain a decision is $L_0$.
The model evaluation procedure then terminates and the chosen model is announced. Note how the thresholds depend on the amount of data available (as expressed by $L$). This variation typifies sequential hypothesis tests.
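The procedure just described can be sketched in code. The sketch below is a rough illustration rather than the text's implementation: it assumes the Wald-style thresholds $\eta_1 = P_D/P_F$ and $\eta_0 = (1 - P_D)/(1 - P_F)$ that appear later in this section, and the values of $m$, $\sigma$, $P_F$, and $P_D$ are made up for the example.

```python
import math
import random

# Illustrative parameters (assumed, not from the text).
m, sigma = 1.0, 2.0
P_F, P_D = 0.01, 0.99

# Wald-style likelihood-ratio thresholds, mapped onto the running sum of observations.
eta0 = (1 - P_D) / (1 - P_F)
eta1 = P_D / P_F

def gamma(eta, L):
    """Threshold on sum_{l=0}^{L-1} r_l after L observations: (sigma^2/m) ln(eta) + L m / 2."""
    return (sigma**2 / m) * math.log(eta) + L * m / 2

def run_sequential_test(model_is_M1):
    """Draw one observation at a time until the running sum leaves the middle region."""
    total, L = 0.0, 0
    while True:
        total += random.gauss(m if model_is_M1 else 0.0, sigma)
        L += 1
        if total <= gamma(eta0, L):
            return "M0", L
        if total >= gamma(eta1, L):
            return "M1", L

decision, L0 = run_sequential_test(model_is_M1=True)
print(decision, "after", L0, "observations")
```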

Average number of required observations

The awake reader might wonder whether the sequential likelihood ratio test just derived has the disturbing property that it may never terminate: can the likelihood ratio wander between the two thresholds forever? Fortunately, the sequential likelihood ratio test has been shown to terminate with probability one (Wald). Confident of eventual termination, we need to explore how many observations are required to meet performance specifications. The number of observations is variable, depending on the observed data and the stringency of the specifications. The average number of observations required can be determined in the interesting case when the observations are statistically independent.

Assuming that the observations are statistically independent and identically distributed, the likelihood ratio is equal to the product of the likelihood ratios evaluated at each observation. Considering $\ln \Lambda_{L_0}(\mathbf{r})$, the logarithm of the likelihood ratio when a decision is made on observation $L_0$, we have
$$
\ln \Lambda_{L_0}(\mathbf{r}) = \sum_{l=0}^{L_0 - 1} \ln \Lambda(r_l)
$$
where $\Lambda(r_l)$ is the likelihood ratio corresponding to the $l$-th observation. We seek an expression for $E[L_0]$, the expected value of the number of observations required to make the decision. To derive this quantity, we evaluate the expected value of the likelihood ratio when the decision is made. This value will usually vary with which model is actually valid; we must consider both models separately. Using the laws of conditional expectation (see Joint Distributions), we find that the expected value of $\ln \Lambda_{L_0}(\mathbf{r})$, assuming that model $\mathcal{M}_1$ was true, is given by
$$
E_1\!\left[\ln \Lambda_{L_0}(\mathbf{r})\right] = E_1\!\left[ E\!\left[ \ln \Lambda_{L_0}(\mathbf{r}) \mid L_0 \right] \right]
$$
The outer expected value is evaluated with respect to the probability distribution of $L_0$; the inner expected value is the average value of the log-likelihood ratio assuming that $L_0$ observations were required to choose model $\mathcal{M}_1$. In the latter case, the log-likelihood is the sum of $L_0$ component log-likelihood ratios:
$$
E\!\left[ \ln \Lambda_{L_0}(\mathbf{r}) \mid L_0 \right] = L_0\, E_1\!\left[\ln \Lambda(r_l)\right]
$$
Noting that the expected value on the right is a constant with respect to the outer expected value, we find that
$$
E_1\!\left[\ln \Lambda_{L_0}(\mathbf{r})\right] = E_1[L_0]\, E_1\!\left[\ln \Lambda(r_l)\right]
$$
The average number of observations required to make a decision, correct or incorrect, assuming that $\mathcal{M}_1$ is true, is thus expressed by
$$
E_1[L_0] = \frac{E_1\!\left[\ln \Lambda_{L_0}(\mathbf{r})\right]}{E_1\!\left[\ln \Lambda(r_l)\right]}
$$
Assuming that the other model was true, we have the complementary result
$$
E_0[L_0] = \frac{E_0\!\left[\ln \Lambda_{L_0}(\mathbf{r})\right]}{E_0\!\left[\ln \Lambda(r_l)\right]}
$$
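The identity $E_1[\ln \Lambda_{L_0}(\mathbf{r})] = E_1[L_0]\, E_1[\ln \Lambda(r_l)]$ can be checked with a quick Monte Carlo experiment. The sketch below does so using the Gaussian per-observation log-likelihood ratio that appears later in this section; the thresholds and parameter values are assumptions chosen only for illustration.

```python
import math
import random

# Monte Carlo check of E1[ln Lambda_L0] = E1[L0] * E1[ln Lambda(r_l)] for the
# Gaussian example. Parameter values and thresholds are illustrative assumptions.
m, sigma = 1.0, 1.0
ln_eta0, ln_eta1 = math.log(1e-2), math.log(1e2)   # log-likelihood-ratio thresholds
trials = 20000

total_log_lr, total_L0 = 0.0, 0
for _ in range(trials):
    log_lr, L0 = 0.0, 0
    while ln_eta0 < log_lr < ln_eta1:              # "need more data" region
        r = random.gauss(m, sigma)                 # model M1 generates the data
        log_lr += m * r / sigma**2 - m**2 / (2 * sigma**2)
        L0 += 1
    total_log_lr += log_lr
    total_L0 += L0

per_sample = m**2 / (2 * sigma**2)                 # E1[ln Lambda(r_l)] under M1
print("E1[ln Lambda_L0]            ~", total_log_lr / trials)
print("E1[L0] * E1[ln Lambda(r_l)] ~", (total_L0 / trials) * per_sample)
```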

The numerator is difficult to calculate exactly but easily approximated; assuming that the likelihood ratio equals its threshold value when the decision is made,
$$
E_0\!\left[\ln \Lambda_{L_0}(\mathbf{r})\right] \approx P_F \ln \eta_1 + (1 - P_F)\ln \eta_0
= P_F \ln\frac{P_D}{P_F} + (1 - P_F)\ln\frac{1 - P_D}{1 - P_F}
$$
$$
E_1\!\left[\ln \Lambda_{L_0}(\mathbf{r})\right] \approx P_D \ln \eta_1 + (1 - P_D)\ln \eta_0
= P_D \ln\frac{P_D}{P_F} + (1 - P_D)\ln\frac{1 - P_D}{1 - P_F}
$$
Note that these expressions are not problem dependent; they depend only on the specified probabilities. The denominator cannot be approximated in a similar way with such generality; it must be evaluated for each problem.
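Because these approximations depend only on the specified probabilities, they are easy to evaluate. The helper below (a hypothetical name, not from the text) computes both, assuming $\eta_1 = P_D/P_F$ and $\eta_0 = (1 - P_D)/(1 - P_F)$ as above.

```python
import math

def expected_log_lr_at_decision(P_F, P_D):
    """Threshold-value approximations for E0[ln Lambda_L0] and E1[ln Lambda_L0],
    assuming the likelihood ratio sits exactly on a threshold when the test stops."""
    ln_eta1 = math.log(P_D / P_F)
    ln_eta0 = math.log((1 - P_D) / (1 - P_F))
    e0 = P_F * ln_eta1 + (1 - P_F) * ln_eta0   # model M0 true
    e1 = P_D * ln_eta1 + (1 - P_D) * ln_eta0   # model M1 true
    return e0, e1

print(expected_log_lr_at_decision(P_F=0.01, P_D=0.99))
```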

In the Gaussian example we have been exploring, the log-likelihood of each component observation $r_l$ is given by
$$
\ln \Lambda(r_l) = \frac{m\, r_l}{\sigma^2} - \frac{m^2}{2\sigma^2}
$$
The conditional expected values required to evaluate the expression for the average number of required observations are
$$
E_0\!\left[\ln \Lambda(r_l)\right] = -\frac{m^2}{2\sigma^2}
\qquad
E_1\!\left[\ln \Lambda(r_l)\right] = \frac{m^2}{2\sigma^2}
$$
For simplicity, let's assume that the false-alarm and detection probabilities are symmetric (i.e., $P_F = 1 - P_D$). The expressions for the average number of observations are then equal for each model, and we have
$$
E_0[L_0] = E_1[L_0] = f(P_F)\,\frac{\sigma^2}{m^2}
$$
where $f(P_F)$ is a function equal to $(2 - 4 P_F)\ln\frac{1 - P_F}{P_F}$. Thus, the number of observations decreases with increasing signal-to-noise ratio $m/\sigma$ and increases as the false-alarm probability is reduced.
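A short sketch of this result, using assumed parameter values chosen only to show the trend, makes both dependencies visible: the count shrinks as $m/\sigma$ grows and grows as $P_F$ shrinks.

```python
import math

def f(P_F):
    """Coefficient (2 - 4*P_F) * ln((1 - P_F) / P_F) from the symmetric case P_F = 1 - P_D."""
    return (2 - 4 * P_F) * math.log((1 - P_F) / P_F)

def average_observations(P_F, m, sigma):
    """Average number of observations E[L0] = f(P_F) * sigma^2 / m^2 under either model."""
    return f(P_F) * sigma**2 / m**2

# Illustrative values (assumed, not from the text).
for P_F in (1e-2, 1e-4, 1e-6):
    print(P_F, average_observations(P_F, m=1.0, sigma=1.0))
```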

Suppose we used a likelihood ratio test in which all the data were considered once and a decision made; how many observations would be required to achieve a specified level of performance, and how would this fixed number compare with the average number of observations in a sequential test? In this example, we find from our earlier calculations that
$$
P_F = Q\!\left( \frac{\sqrt{L}\, m}{2\sigma} \right)
$$
so that
$$
L = 4\left[ Q^{-1}(P_F) \right]^2 \frac{\sigma^2}{m^2}
$$
The durations of the sequential and block tests depend on the signal-to-noise ratio in the same way; however, the dependence on the false-alarm probability is quite different. As depicted in the figure below, the disparity between these quantities increases rapidly as the false-alarm probability decreases, with the sequential test requiring correspondingly fewer observations on the average.

The numbers of observations required by the sequential test (on the average) and by the block test for Gaussian observations are proportional to $\sigma^2/m^2$; the coefficients of these expressions, $f(P_F)$ and $4\left[Q^{-1}(P_F)\right]^2$ respectively, are shown.
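To reproduce this comparison numerically, the sketch below evaluates the two coefficients that multiply $\sigma^2/m^2$, using the standard-library normal quantile for $Q^{-1}$; the sample values of $P_F$ are assumptions chosen only to show the trend.

```python
import math
from statistics import NormalDist

def sequential_coefficient(P_F):
    """f(P_F) = (2 - 4*P_F) * ln((1 - P_F)/P_F): multiplies sigma^2/m^2 for the sequential test."""
    return (2 - 4 * P_F) * math.log((1 - P_F) / P_F)

def block_coefficient(P_F):
    """4 * Qinv(P_F)^2: multiplies sigma^2/m^2 for the fixed-length (block) test."""
    q_inv = NormalDist().inv_cdf(1 - P_F)      # Q^{-1}(P_F)
    return 4 * q_inv**2

# The gap widens as P_F shrinks; the sequential test needs fewer observations on average.
for P_F in (1e-2, 1e-4, 1e-6):
    print(P_F, sequential_coefficient(P_F), block_coefficient(P_F))
```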

We must not forget that these results apply to the average number of observations required to make a decision. Expressions for the distribution of the number of observations are complicated and depend heavily on the problem. When an extremely large number of observations is required to resolve a difficult case to the required accuracy, we are forced to truncate the sequential test, stopping when a specified number of observations has been used. A decision would then be made by dividing the region between the boundaries in half and selecting the model corresponding to the boundary nearest to the sufficient statistic. If this truncation point is larger than the expected number, the performance probabilities will change little. "Larger" is again problem dependent; analytic results are few, leaving the option of computer simulations to estimate the distribution of the number of observations required for a decision.
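A possible implementation of such a truncated test is sketched below; run_truncated_test, draw_observation, and the parameter values are hypothetical names and choices for illustration, not from the text.

```python
import math
import random

def run_truncated_test(draw_observation, gamma0, gamma1, L_max):
    """Sequential test cut off after L_max observations. Assumed helpers:
    draw_observation() yields one sample; gamma0(L), gamma1(L) give the thresholds."""
    total = 0.0
    for L in range(1, L_max + 1):
        total += draw_observation()
        if total <= gamma0(L):
            return "M0", L
        if total >= gamma1(L):
            return "M1", L
    # Truncation: split the undecided region in half and pick the nearer boundary's model.
    midpoint = (gamma0(L_max) + gamma1(L_max)) / 2
    return ("M1" if total >= midpoint else "M0"), L_max

# Illustrative usage with the Gaussian thresholds used earlier in this section.
m, sigma, ln_eta0, ln_eta1 = 1.0, 2.0, math.log(1 / 99), math.log(99)
decision, L = run_truncated_test(
    draw_observation=lambda: random.gauss(m, sigma),          # data generated under M1
    gamma0=lambda L: (sigma**2 / m) * ln_eta0 + L * m / 2,
    gamma1=lambda L: (sigma**2 / m) * ln_eta1 + L * m / 2,
    L_max=200,
)
print(decision, L)
```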

Source:  OpenStax, Statistical signal processing. OpenStax CNX. Dec 05, 2011 Download for free at http://cnx.org/content/col11382/1.1