To formulate the basic learning from data problem, we must specify several basic elements: data spaces, probability measures, loss functions, and statistical risk.
Learning from data begins with a specification of two spaces:
$$\mathcal{X} \equiv \text{Input Space},$$
$$\mathcal{Y} \equiv \text{Output Space}.$$
The input space $\mathcal{X}$ is also sometimes called the “feature space” or “signal domain.” The output space $\mathcal{Y}$ is also called the “class label space,” “outcome space,” “response space,” or “signal range.”
A classic example is estimating a signal in noise:
$$Y = X + W,$$
where $X$ is a random sample point on the real line and $W$ is noise independent of $X$.
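To make the model concrete, here is a minimal simulation sketch in Python. The particular distributional choices (standard normal sample points, Gaussian noise with standard deviation 0.1) are illustrative assumptions, not specified in the text.

```python
# A minimal sketch of the signal-plus-noise model Y = X + W.
# The distributional choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal(n)           # random sample points on the real line
W = 0.1 * rng.standard_normal(n)     # noise drawn independently of X
Y = X + W                            # noisy observations
print(np.column_stack([X, W, Y]))
```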
Define a joint probability distribution on $\mathcal{X} \times \mathcal{Y}$ denoted $P_{XY}$. Let $(X, Y)$ denote a pair of random variables distributed according to $P_{XY}$. We will also have use for marginal and conditional distributions. Let $P_X$ denote the marginal distribution on $\mathcal{X}$, and let $P_{Y|X}$ denote the conditional distribution of $Y$ given $X$. For any distribution $P$, let $p$ denote its density function with respect to the corresponding dominating measure; e.g., Lebesgue measure for continuous random variables or counting measure for discrete random variables.
Define the expectation operator:
$$\mathbb{E}_{XY}\big[f(X, Y)\big] \equiv \int f(x, y) \, dP_{XY}(x, y) = \int f(x, y) \, p_{XY}(x, y) \, dx \, dy.$$
We will also make use of corresponding marginal and conditional expectations such as $\mathbb{E}_X$ and $\mathbb{E}_{Y|X}$.
Wherever convenient and obvious based on context, we may drop the subscripts (e.g., $\mathbb{E}$ instead of $\mathbb{E}_{XY}$) for notational ease.
A loss function is a mapping
$$\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}.$$
In binary classification problems, $\mathcal{Y} = \{0, 1\}$. The 0/1 loss function is usually used:
$$\ell(y_1, y_2) = \mathbf{1}_{\{y_1 \neq y_2\}},$$
where $\mathbf{1}_A$ is the indicator function which takes a value of 1 if condition $A$ is true and zero otherwise. We typically will compare a true label $y$ with a prediction $\widehat{y}$, in which case the 0/1 loss simply counts misclassifications.
In regression or estimation problems, $\mathcal{Y} = \mathbb{R}$. The squared error loss function is often employed:
$$\ell(y_1, y_2) = (y_1 - y_2)^2,$$
the square of the difference between $y_1$ and $y_2$. In application, we are interested in a true value $y$ in comparison to an estimate $\widehat{y}$.
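Both loss functions are one-liners in code. The sketch below (plain Python, with function names of our own choosing) evaluates each on a true value and a prediction:

```python
def zero_one_loss(y1, y2):
    """0/1 loss: 1 if the two labels disagree, 0 otherwise."""
    return 1 if y1 != y2 else 0

def squared_error_loss(y1, y2):
    """Squared error loss: the square of the difference."""
    return (y1 - y2) ** 2

print(zero_one_loss(1, 0))           # 1: a misclassification
print(zero_one_loss(1, 1))           # 0: a correct prediction
print(squared_error_loss(2.0, 1.5))  # 0.25
```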
The basic problem in learning is to determine a mapping $f : \mathcal{X} \to \mathcal{Y}$ that takes an input $x \in \mathcal{X}$ and predicts the corresponding output $y \in \mathcal{Y}$. The performance of a given map $f$ is measured by its expected loss or risk $R(f)$:
$$R(f) \equiv \mathbb{E}_{XY}\big[\ell(f(X), Y)\big].$$
The risk tells us how well, on average, the predictor $f$ performs with respect to the chosen loss function. A key quantity of interest is the minimum risk value, defined as
$$R^* = \inf_f R(f),$$
where the infimum is taken over all measurable functions.
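When $P_{XY}$ is known and can be sampled, the risk of a candidate map can be approximated by a Monte Carlo average. A minimal sketch, assuming the Gaussian signal-plus-noise model from the earlier example and the illustrative predictor $f(x) = x$:

```python
# Monte Carlo approximation of R(f) = E[(f(X) - Y)^2] under an assumed P_XY.
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
X = rng.standard_normal(n)            # X ~ N(0, 1), an assumption
Y = X + 0.1 * rng.standard_normal(n)  # Y = X + W with W ~ N(0, 0.1^2)

f = lambda x: x                       # illustrative candidate predictor
risk_estimate = np.mean((f(X) - Y) ** 2)
print(risk_estimate)                  # approaches E[W^2] = 0.01
```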
Suppose that $(X, Y)$ are distributed according to $P_{XY}$ ($(X, Y) \sim P_{XY}$ for short). Our goal is to find a map $f$ so that $f(X) \approx Y$ with high probability. Ideally, we would choose $f$ to minimize the risk $R(f)$. However, in order to compute the risk (and hence optimize it) we need to know the joint distribution $P_{XY}$. In many problems of practical interest, the joint distribution is unknown, and minimizing the risk is not possible.
Suppose that we have some exemplary samples from the distribution. Specifically, consider $n$ samples $\{X_i, Y_i\}_{i=1}^{n}$ distributed independently and identically (iid) according to the otherwise unknown $P_{XY}$. Let us call these samples training data, and denote the collection by $D_n \equiv \{X_i, Y_i\}_{i=1}^{n}$. Let's also define a collection of candidate mappings $\mathcal{F}$. We will use the training data $D_n$ to pick a mapping $\widehat{f}_n \in \mathcal{F}$ that we hope will be a good predictor. This is sometimes called the Model Selection problem. Note that the selected model is a function of the training data:
$$\widehat{f}_n(x) = f(x; D_n),$$
which is what the subscript $n$ in $\widehat{f}_n$ refers to. The risk of $\widehat{f}_n$ is given by
$$R(\widehat{f}_n) = \mathbb{E}_{XY}\big[\ell(\widehat{f}_n(X), Y)\big].$$
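The text leaves the selection rule unspecified; one common choice is to pick the candidate in $\mathcal{F}$ with the smallest average loss on the training data (empirical risk minimization). A minimal sketch, assuming squared error loss and a small finite class of linear maps $f_a(x) = a x$ of our own choosing:

```python
# Selecting f_hat_n from a finite class F by minimizing average training loss.
# The data model and candidate class are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n = 200
X_train = rng.standard_normal(n)                        # training inputs
Y_train = 2.0 * X_train + 0.1 * rng.standard_normal(n)  # training outputs

candidates = [0.5, 1.0, 1.5, 2.0, 2.5]   # F = {f_a : f_a(x) = a * x}

def average_training_loss(a):
    """Average squared error of f_a over the training data D_n."""
    return np.mean((a * X_train - Y_train) ** 2)

a_hat = min(candidates, key=average_training_loss)  # the selected model
print(a_hat)                                        # 2.0 for this realization
```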
Note that since $\widehat{f}_n$ depends on $D_n$ in addition to a new random pair $(X, Y)$, the risk $R(\widehat{f}_n)$ is a random variable (i.e., a function of the training data $D_n$). Therefore, we are interested in the expected risk, computed over random realizations of the training data:
$$\mathbb{E}_{D_n}\big[R(\widehat{f}_n)\big].$$
We hope that $\widehat{f}_n$ produces a small expected risk.
The notion of expected risk can be interpreted as follows. We would like to define an algorithm (a model selection process) that performs well on average, over any random sample of training data. The expected risk is a measure of the expected performance of the algorithm with respect to the chosen loss function. That is, we are not gauging the risk of a particular map $f \in \mathcal{F}$, but rather we are measuring the performance of the algorithm that takes any realization of training data and selects an appropriate model in $\mathcal{F}$.
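This interpretation suggests a direct (if expensive) numerical check: draw many independent training sets, run the selection algorithm on each, and average the resulting risks. The sketch below continues the illustrative model and candidate class from the previous sketch, under which the risk $R(f_a) = (a - 2)^2 + 0.01$ is available in closed form.

```python
# Estimating the expected risk E_Dn[R(f_hat_n)] by averaging over many
# independent realizations of the training data D_n. Model and class are
# the same illustrative assumptions as in the previous sketch.
import numpy as np

rng = np.random.default_rng(2)
candidates = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

def true_risk(a):
    """R(f_a) = E[(a X - Y)^2] = (a - 2)^2 + 0.01 under the assumed model."""
    return (a - 2.0) ** 2 + 0.01

risks = []
for _ in range(1000):                # 1000 independent draws of D_n
    X = rng.standard_normal(20)
    Y = 2.0 * X + 0.1 * rng.standard_normal(20)
    emp = [np.mean((a * X - Y) ** 2) for a in candidates]
    a_hat = candidates[int(np.argmin(emp))]  # model selected from this D_n
    risks.append(true_risk(a_hat))           # R(f_hat_n) is a random variable

print(np.mean(risks))                # Monte Carlo estimate of expected risk
```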
This course is concerned with determining “good” model spaces and useful and effective model selection algorithms.