Throughout this module, let X denote the input to a decision-making process and Y denote the correct response or output (e.g., the value of a parameter, the label of a class, the signal of interest). We assume that X and Y are random variables or random vectors with joint distribution P_{X,Y}(x, y), where x and y denote specific values that may be taken by the random variables X and Y, respectively. The observation X is used to make decisions pertaining to the quantity of interest Y. For the purposes of illustration, we will focus on the task of determining the value of the quantity of interest. A decision rule for this task is a function f that takes the observation X as input and outputs a prediction of the quantity Y. We denote a decision rule by f, or f(X) when we wish to indicate explicitly the dependence of the decision rule on the observation. We will examine techniques for designing decision rules and for analyzing their performance.
The accuracy of a decision is measured with a loss function. For example, if our goal is to determine the value of Y, then a loss function ℓ takes as inputs the true value y and the predicted value ŷ (the decision) and outputs a non-negative real number (the "loss") reflective of the accuracy of the decision. Two of the most commonly encountered loss functions are:

- the 0/1 loss, ℓ(y, ŷ) = 1 if ŷ ≠ y and 0 otherwise;
- the squared error loss, ℓ(y, ŷ) = (y − ŷ)².
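The two loss functions above can be written directly as short functions. The sketch below is only illustrative; the function names are our own choices, not notation from the module.

```python
def zero_one_loss(y, y_hat):
    """0/1 loss: 1 when the prediction y_hat differs from the truth y, else 0."""
    return float(y_hat != y)

def squared_error_loss(y, y_hat):
    """Squared error loss: (y - y_hat)^2."""
    return (y - y_hat) ** 2

print(zero_one_loss(1, 0))           # 1.0 (wrong class)
print(zero_one_loss(1, 1))           # 0.0 (correct class)
print(squared_error_loss(3.0, 2.5))  # 0.25
```

Note that the 0/1 loss treats every error as equally bad, while the squared error loss penalizes predictions in proportion to how far they miss.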
The 0/1 loss is commonly used in detection and classification problems, and the squared error loss is more appropriate for problems involving the estimation of a continuous parameter. Note that since the inputs to the loss function may be random variables, so is the loss.
A risk R(f) is a function of the decision rule f, and is defined to be the expectation of a loss with respect to the joint distribution P_{X,Y}(x, y). For example, the expected 0/1 loss produces the probability of error risk function; i.e., a simple calculation shows that E[1_{f(X) ≠ Y}] = P(f(X) ≠ Y). The expected squared error loss produces the mean squared error (MSE) risk function, E[(Y − f(X))²].
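Since the risk is an expectation over the joint distribution, it can be approximated by averaging the loss over samples drawn from that distribution. The sketch below uses a hypothetical model of our own choosing (Y equally likely 0 or 1, and X equal to Y plus Gaussian noise) together with a simple threshold rule; none of these specifics come from the module.

```python
import random

random.seed(0)

def risk_estimate(rule, loss, samples):
    """Monte Carlo estimate of the risk E[loss(Y, rule(X))]."""
    return sum(loss(y, rule(x)) for x, y in samples) / len(samples)

# Hypothetical joint model: Y ~ uniform on {0, 1}, X = Y + N(0, 0.25).
samples = []
for _ in range(10000):
    y = random.randint(0, 1)
    x = y + random.gauss(0.0, 0.5)
    samples.append((x, y))

threshold_rule = lambda x: int(x > 0.5)        # a simple decision rule f(x)
zero_one = lambda y, y_hat: float(y_hat != y)  # 0/1 loss

p_err = risk_estimate(threshold_rule, zero_one, samples)
print(p_err)  # close to 0.159, the true probability of error for this model
```

Here the expected 0/1 loss is exactly the probability of error of the threshold rule, which is why the empirical average converges to P(f(X) ≠ Y).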
Optimal decisions are obtained by choosing a decision rule f that minimizes the desired risk function. Given complete knowledge of the probability distributions involved (e.g., P_{X,Y}(x, y)), one can explicitly or numerically design an optimal decision rule, denoted f*, that minimizes the risk function.
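To make the idea of designing an optimal rule from known distributions concrete, the sketch below works out the rule minimizing the probability of error in an assumed two-class Gaussian setting (equal priors, X | Y=y ~ N(y, σ²)); the setting is our own illustrative choice, not one specified in the module. For the 0/1 loss, the minimizing rule picks the value of y with the largest posterior probability, i.e., the largest product p(x|y) P(Y=y).

```python
import math

sigma = 0.5  # assumed noise standard deviation

def likelihood(x, y):
    """Conditional density p(x | y) for the assumed model X | Y=y ~ N(y, sigma^2)."""
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def f_star(x):
    """Optimal rule for the 0/1 loss: maximize p(x|y) * P(Y=y) over y in {0, 1}."""
    return max((0, 1), key=lambda y: likelihood(x, y) * 0.5)

print(f_star(0.2), f_star(0.8))  # 0 1
```

With equal priors and equal variances, this rule reduces to thresholding x at the midpoint 0.5 between the two class means, matching the intuitive detector.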
The conditional distribution of the observation X given the quantity of interest Y is denoted by P_{X|Y}(x|y). The conditional distribution can be viewed as a generative model, probabilistically describing the observations resulting from a given value, y, of the quantity of interest. For example, if Y is the value of a parameter, then P_{X|Y}(x|y) is the probability distribution of the observation X when the parameter value is set to Y = y. If X is a continuous random variable with conditional density p(x|y), or a discrete random variable with conditional probability mass function (pmf) p(x|y), then given a value y we can assess the probability of a particular measurement value x by the magnitude of either the conditional density or pmf.
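As a concrete instance of a generative model with a conditional pmf, suppose (as an assumption for illustration, not a model from the module) that given a rate parameter y, the observation X is Poisson with mean y, so p(x|y) = e^{-y} y^x / x!. Fixing y, we can evaluate how probable each measurement value x is:

```python
import math

def poisson_pmf(x, y):
    """Conditional pmf p(x|y) of a Poisson observation X with rate parameter y."""
    return math.exp(-y) * y ** x / math.factorial(x)

# With the parameter set to y = 3, counts near 3 are the most probable.
for x in (0, 3, 10):
    print(x, round(poisson_pmf(x, 3.0), 4))
```

Evaluated the other way around, with the measurement x held fixed and y varying, the same function p(x|y) becomes the likelihood of the parameter, which is the starting point for designing decision rules from data.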