Define the generalization error of a hypothesis $h$ to be $\varepsilon(h) = P_{(x,y)\sim\mathcal{D}}(h(x) \neq y)$. I.e., this is the probability that, if we now draw a new example $(x, y)$ from the distribution $\mathcal{D}$, $h$ will misclassify it.
Note that we have assumed that the training data was drawn from the same distribution $\mathcal{D}$ with which we're going to evaluate our hypotheses (in the definition of generalization error). This is sometimes also referred to as one of the PAC assumptions. PAC stands for “probably approximately correct,” which is a framework and set of assumptions under which numerous results on learning theory were proved. Of these, the assumption of training and testing on the same distribution, and the assumption of independently drawn training examples, are the most important.
Consider the setting of linear classification, and let $h_\theta(x) = 1\{\theta^T x \geq 0\}$. Given a training set $S = \{(x^{(i)}, y^{(i)}); i = 1, \ldots, m\}$ drawn iid from $\mathcal{D}$, the training error of a hypothesis $h$ is $\hat{\varepsilon}(h) = \frac{1}{m}\sum_{i=1}^{m} 1\{h(x^{(i)}) \neq y^{(i)}\}$, the fraction of training examples that $h$ misclassifies. What's a reasonable way of fitting the parameters $\theta$? One approach is to try to minimize the training error, and pick

$$\hat{\theta} = \arg\min_\theta \hat{\varepsilon}(h_\theta).$$
We call this process empirical risk minimization (ERM), and the resulting hypothesis output by the learning algorithm is $\hat{h} = h_{\hat{\theta}}$. We think of ERM as the most “basic” learning algorithm, and it will be this algorithm that we focus on in these notes. (Algorithms such as logistic regression can also be viewed as approximations to empirical risk minimization.)
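As an illustrative sketch (not part of the original notes), the following Python snippet carries out ERM for this linear classifier. Since the 0-1 training error is not differentiable, the sketch simply searches over randomly drawn candidate directions for $\theta$; the synthetic data and the number of candidates are assumptions made for the example.

```python
import numpy as np

def training_error(theta, X, y):
    """Fraction of training examples misclassified by h_theta(x) = 1{theta^T x >= 0}."""
    predictions = (X @ theta >= 0).astype(int)
    return np.mean(predictions != y)

def erm_linear(X, y, num_candidates=10000, seed=0):
    """Approximate ERM by random search over theta (the 0-1 loss is not differentiable)."""
    rng = np.random.default_rng(seed)
    best_theta, best_err = None, np.inf
    for _ in range(num_candidates):
        theta = rng.normal(size=X.shape[1])  # random candidate parameter vector
        err = training_error(theta, X, y)
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta, best_err

# Toy example: 2D inputs with an intercept column appended, linearly separable labels.
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(size=(200, 2)), np.ones((200, 1))])
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
theta_hat, err_hat = erm_linear(X, y)
print(f"training error of h_theta_hat: {err_hat:.3f}")
```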
In our study of learning theory, it will be useful to abstract away from the specific parameterization of hypotheses and from issues such as whether we're using a linear classifier. We define the hypothesis class $\mathcal{H}$ used by a learning algorithm to be the set of all classifiers considered by it. For linear classification, $\mathcal{H} = \{h_\theta : h_\theta(x) = 1\{\theta^T x \geq 0\}, \theta \in \mathbb{R}^{d+1}\}$ is thus the set of all classifiers over $\mathcal{X}$ (the domain of the inputs) where the decision boundary is linear. More broadly, if we were studying, say, neural networks, then we could let $\mathcal{H}$ be the set of all classifiers representable by some neural network architecture.
Empirical risk minimization can now be thought of as a minimization over the class of functions $\mathcal{H}$, in which the learning algorithm picks the hypothesis:

$$\hat{h} = \arg\min_{h \in \mathcal{H}} \hat{\varepsilon}(h).$$
Let's start by considering a learning problem in which we have a finite hypothesis class $\mathcal{H} = \{h_1, \ldots, h_k\}$ consisting of $k$ hypotheses. Thus, $\mathcal{H}$ is just a set of $k$ functions mapping from $\mathcal{X}$ to $\{0, 1\}$, and empirical risk minimization selects $\hat{h}$ to be whichever of these $k$ functions has the smallest training error.
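In the finite case, ERM is just an argmin over the $k$ training errors. Here is a minimal sketch (an assumption of this writeup, not code from the notes); the three example hypotheses making up $\mathcal{H}$ are hypothetical threshold classifiers.

```python
import numpy as np

def erm_finite(hypotheses, X, y):
    """Pick the hypothesis in a finite class with the smallest training error."""
    errors = [np.mean(h(X) != y) for h in hypotheses]
    best = int(np.argmin(errors))
    return hypotheses[best], errors[best]

# Hypothetical finite class H = {h_1, h_2, h_3}: three fixed threshold classifiers.
hypotheses = [
    lambda X: (X[:, 0] >= 0).astype(int),            # threshold on first feature
    lambda X: (X[:, 1] >= 0).astype(int),            # threshold on second feature
    lambda X: (X[:, 0] + X[:, 1] >= 0).astype(int),  # threshold on their sum
]
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] >= 0).astype(int)  # labels that h_1 classifies perfectly
h_hat, err_hat = erm_finite(hypotheses, X, y)
print(f"smallest training error: {err_hat:.3f}")
```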
We would like to give guarantees on the generalization error $\varepsilon(\hat{h})$ of $\hat{h}$. Our strategy for doing so will be in two parts: First, we will show that $\hat{\varepsilon}(h)$ is a reliable estimate of $\varepsilon(h)$ for all $h$. Second, we will show that this implies an upper-bound on the generalization error of $\hat{h}$.
Take any one, fixed, $h_i \in \mathcal{H}$. Consider a Bernoulli random variable $Z$ whose distribution is defined as follows. We're going to sample $(x, y) \sim \mathcal{D}$. Then, we set $Z = 1\{h_i(x) \neq y\}$. I.e., we're going to draw one example, and let $Z$ indicate whether $h_i$ misclassifies it. Similarly, we also define $Z_j = 1\{h_i(x^{(j)}) \neq y^{(j)}\}$. Since our training set was drawn iid from $\mathcal{D}$, $Z$ and the $Z_j$'s have the same distribution.
We see that the misclassification probability on a randomly drawn example—that is, $\varepsilon(h_i)$—is exactly the expected value of $Z$ (and of the $Z_j$'s). Moreover, the training error can be written

$$\hat{\varepsilon}(h_i) = \frac{1}{m} \sum_{j=1}^{m} Z_j.$$
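The training error is thus the sample mean of $m$ iid Bernoulli variables with mean $\varepsilon(h_i)$. The following simulation illustrates this concentration; the particular distribution $\mathcal{D}$ and classifier $h$ are assumptions chosen so that $\varepsilon(h)$ has a known closed form.

```python
import numpy as np

# Assumed setup (not from the notes): under D, x ~ N(0, 1) and y = 1{x >= 0.3};
# the classifier h predicts 1{x >= 0}, so h misclassifies exactly when
# 0 <= x < 0.3, giving epsilon(h) = Phi(0.3) - Phi(0) ≈ 0.1179.
rng = np.random.default_rng(0)

def sample_Z(m):
    """Draw m iid examples from D and return Z_j = 1{h(x^(j)) != y^(j)}."""
    x = rng.normal(size=m)
    y = (x >= 0.3).astype(int)
    h = (x >= 0.0).astype(int)
    return (h != y).astype(int)

for m in [10, 100, 10000]:
    Z = sample_Z(m)
    print(f"m={m:6d}: training error (mean of Z_j) = {Z.mean():.4f}")
# As m grows, the sample mean of the Z_j's concentrates around
# epsilon(h) ≈ 0.1179, as the next part of the argument will quantify.
```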