In statistics, hypothesis testing is sometimes known as decision theory or simply testing. The key result around which all decision theory revolves is the likelihood ratio test.
In a binary hypothesis testing problem, four possible outcomes can result. Model $\mathcal{M}_0$ did in fact represent the best model for the data and the decision rule said it was (a correct decision) or said it wasn't (an erroneous decision). The other two outcomes arise when model $\mathcal{M}_1$ was in fact true, with either a correct or incorrect decision made. The decision process operates by segmenting the range of observation values into two disjoint decision regions $\mathcal{R}_0$ and $\mathcal{R}_1$. All values of $r$ fall into either $\mathcal{R}_0$ or $\mathcal{R}_1$. If a given $r$ lies in $\mathcal{R}_0$, for example, we will announce our decision "model $\mathcal{M}_0$ was true"; if in $\mathcal{R}_1$, model $\mathcal{M}_1$ would be proclaimed. To derive a rational method of deciding which model best describes the observations, we need a criterion to assess the quality of the decision process. Optimizing this criterion will specify the decision regions.
The Bayes' decision criterion seeks to minimize a cost function associated with making a decision. Let $C_{ij}$ be the cost of mistaking model $j$ for model $i$ ($i \neq j$) and $C_{ii}$ the presumably smaller cost of correctly choosing model $i$: $C_{ij} > C_{ii}$, $i \neq j$. Let $\pi_i$ be the a priori probability of model $i$. Minimizing the average cost of making a decision yields the likelihood ratio test:
$$\Lambda(r) \equiv \frac{p_{r|\mathcal{M}_1}(r|\mathcal{M}_1)}{p_{r|\mathcal{M}_0}(r|\mathcal{M}_0)} \;\underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}}\; \frac{\pi_0 (C_{10} - C_{00})}{\pi_1 (C_{01} - C_{11})} \equiv \eta$$
where $\Lambda(r)$ is the likelihood ratio and $\eta$ is the threshold.
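As a hedged illustration of the test just stated (again, not drawn from the original text), the sketch below computes the Bayes threshold $\eta$ from assumed costs and priors and applies the likelihood ratio test to hypothetical Gaussian likelihood functions; all numerical values are made up for the example.

```python
import numpy as np
from scipy.stats import norm

# Assumed (illustrative) costs: C[i, j] = cost of saying model i when model j is true.
C = np.array([[0.0, 1.0],   # C00, C01
              [1.0, 0.0]])  # C10, C11
pi0, pi1 = 0.5, 0.5         # assumed a priori probabilities

# Bayes threshold: eta = pi0*(C10 - C00) / (pi1*(C01 - C11))
eta = pi0 * (C[1, 0] - C[0, 0]) / (pi1 * (C[0, 1] - C[1, 1]))

# Hypothetical likelihood functions: r ~ N(0,1) under M0, r ~ N(1,1) under M1.
def likelihood_ratio(r):
    return norm.pdf(r, loc=1.0) / norm.pdf(r, loc=0.0)

r = 0.8  # a single observation
decision = 1 if likelihood_ratio(r) > eta else 0
print(f"eta = {eta:.3f}, Lambda(r) = {likelihood_ratio(r):.3f}, decide M{decision}")
```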
The data processing operations are captured entirely by the likelihood ratio $\Lambda(r)$. Furthermore, note that only the value of the likelihood ratio relative to the threshold matters; to simplify the computation of the likelihood ratio, we can perform any positively monotonic operations simultaneously on the likelihood ratio and the threshold without affecting the comparison. We can multiply the ratio by a positive constant, add any constant, or apply a monotonically increasing function which simplifies the expressions. We single out one such function, the logarithm, because it simplifies likelihood ratios that commonly occur in signal processing applications. Known as the log-likelihood, we explicitly express the likelihood ratio test with it as
$$\ln \Lambda(r) \;\underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}}\; \ln \eta$$
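To see why the logarithm is so convenient, consider a standard illustration (not taken from this module): $L$ statistically independent Gaussian observations $r_l$ with unit variance and mean $0$ under $\mathcal{M}_0$, mean $m$ under $\mathcal{M}_1$. The likelihood ratio is a product of exponentials, and the logarithm turns that product into a sum:
$$\ln \Lambda(r) = \sum_{l=0}^{L-1} \frac{r_l^2 - (r_l - m)^2}{2} = m \sum_{l=0}^{L-1} r_l - \frac{L m^2}{2} \;\underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}}\; \ln \eta$$
so the test reduces to comparing the sum of the observations against a threshold.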
As we shall see, if we use a criterion other than the Bayes' criterion, the decision rule often still involves the likelihood ratio. The likelihood ratio is composed of the quantities $p_{r|\mathcal{M}_i}(r|\mathcal{M}_i)$, termed the likelihood function, which is also important in estimation theory. It is this conditional density that portrays the probabilistic model describing data generation. The likelihood function completely characterizes the kind of "world" assumed by each model; for each model, we must specify the likelihood function so that we can solve the hypothesis testing problem.
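Since each model is characterized entirely by its likelihood function, setting up a hypothesis testing problem in software amounts to specifying those conditional densities. A minimal sketch, assuming two hypothetical models for i.i.d. observations (Laplacian under $\mathcal{M}_0$, Gaussian under $\mathcal{M}_1$), evaluating the log-likelihood ratio of an observation vector:

```python
import numpy as np
from scipy.stats import laplace, norm

# Each model is specified completely by its likelihood function p(r | M_i).
# These particular densities are hypothetical, chosen only for illustration.
def log_likelihood_M0(r):
    return laplace.logpdf(r, loc=0.0, scale=1.0).sum()  # i.i.d.: densities multiply, logs add

def log_likelihood_M1(r):
    return norm.logpdf(r, loc=0.0, scale=1.0).sum()

r = np.array([0.3, -1.2, 0.7, 2.1])
log_Lambda = log_likelihood_M1(r) - log_likelihood_M0(r)
print(f"ln Lambda(r) = {log_Lambda:.3f}")  # compare against ln(eta) to decide
```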
A complication, which arises in some cases, is that the sufficient statistic $\Upsilon(r)$, the function of the observations to which the likelihood ratio test reduces, may not be monotonic. If monotonic, the decision regions $\mathcal{R}_0$ and $\mathcal{R}_1$ are simply connected (all portions of a region can be reached without crossing into the other region). If not, the regions are not simply connected and decision region islands are created (see this problem). Such regions usually complicate calculations of decision performance. Monotonic or not, the decision rule proceeds as described: the sufficient statistic $\Upsilon(r)$ is computed for each observation vector and compared to a threshold.
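A standard situation that produces disconnected decision regions (offered here as an illustrative sketch, not the problem the text cites) is testing between two zero-mean Gaussian models that differ only in variance: the log-likelihood ratio depends on $r$ only through $r^2$, so one decision region is an interval around the origin while the other is the union of its two tails.

```python
import numpy as np
from scipy.stats import norm

# Zero-mean Gaussians differing only in variance (illustrative values).
sigma0, sigma1 = 1.0, 2.0   # M0: N(0, sigma0^2), M1: N(0, sigma1^2)
eta = 1.0                   # threshold from some criterion (assumed)

r = np.linspace(-5, 5, 1001)
log_Lambda = norm.logpdf(r, scale=sigma1) - norm.logpdf(r, scale=sigma0)

decide_M1 = log_Lambda > np.log(eta)
# R0 is a single interval around the origin; R1 splits into two tails.
boundaries = r[np.flatnonzero(np.diff(decide_M1.astype(int)))]
print("region boundaries near:", np.round(boundaries, 2))
```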
An instructor in a course in detection theory wants to determine if a particular student studied for his last test. The observed quantity is the student's grade, which we denote by $G$. Failure may not indicate studiousness: conscientious students may fail the test. Define the models as
$$\mathcal{M}_0: \text{the student did not study}$$
$$\mathcal{M}_1: \text{the student studied}$$
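The module goes on to specify conditional densities for the grade, which are not reproduced here. As a purely hypothetical stand-in, the sketch below assumes the grade (out of 100) is uniform when the student did not study and linearly skewed toward high grades when the student studied, then applies the likelihood ratio test to a few observed grades.

```python
from scipy.stats import uniform, triang

# Hypothetical likelihood functions for the grade G (not the module's actual densities):
#   M0 (did not study): G uniform on [0, 100]
#   M1 (studied):       G triangular on [0, 100], peaked at 100
p_G_M0 = uniform(loc=0, scale=100)
p_G_M1 = triang(c=1.0, loc=0, scale=100)

def decide(G, eta=1.0):
    """Likelihood ratio test on a single observed grade."""
    Lambda = p_G_M1.pdf(G) / p_G_M0.pdf(G)
    return "M1 (studied)" if Lambda > eta else "M0 (did not study)"

for G in (20, 50, 80):
    print(G, "->", decide(G))
```

Under these assumed densities, $\Lambda(G) = G/50$, so with $\eta = 1$ the rule decides "studied" exactly when the grade exceeds 50.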