We ended the previous lecture with a brief discussion of overfitting. Recall that, given a set of data points $\{(X_i, Y_i)\}_{i=1}^n$ and a space of functions (or models) $\mathcal{F}$, our goal in solving the learning from data problem is to choose a function $f \in \mathcal{F}$ which minimizes the expected risk $R(f) = \mathbb{E}[\ell(f(X), Y)]$, where the expectation is taken over the distribution of the data points $(X, Y)$. One approach to avoiding overfitting is to restrict $\mathcal{F}$ to some subset of all measurable functions. To gauge the performance of a given $f \in \mathcal{F}$ in this case, we examine the difference between the expected risk of $f$ and the Bayes risk $R^* = \inf_f R(f)$, called the excess risk. For the function $\hat{f}_n$ chosen from the data, the excess risk decomposes as
\[
R(\hat{f}_n) - R^* \;=\; \underbrace{\left( R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f) \right)}_{\text{estimation error}} \;+\; \underbrace{\left( \inf_{f \in \mathcal{F}} R(f) - R^* \right)}_{\text{approximation error}} .
\]
The approximation error term quantifies the performance hit incurred by imposing restrictions on $\mathcal{F}$. The estimation error term is due to the randomness of the training data, and it expresses how well the chosen function $\hat{f}_n$ will perform in relation to the best possible $f$ in the class $\mathcal{F}$. This decomposition into stochastic and approximation errors is similar to the bias-variance tradeoff which arises in classical estimation theory: the approximation error is like a bias squared term, and the estimation error is like a variance term. By allowing the space $\mathcal{F}$ to be large (when we say $\mathcal{F}$ is large, we mean that $|\mathcal{F}|$, the number of elements in $\mathcal{F}$, is large), we can make the approximation error as small as we want, at the cost of incurring a large estimation error. On the other hand, if $\mathcal{F}$ is very small then the approximation error will be large, but the estimation error may be very small. This tradeoff is illustrated in [link].
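For comparison, recall the classical bias-variance decomposition of the mean-squared error of an estimator $\hat{\theta}$ of a parameter $\theta$ (a standard identity, included here for reference rather than taken from the lecture):
\[
\mathbb{E}\big[(\hat{\theta} - \theta)^2\big] \;=\; \underbrace{\big(\mathbb{E}[\hat{\theta}] - \theta\big)^2}_{\text{bias}^2} \;+\; \underbrace{\mathbb{E}\Big[\big(\hat{\theta} - \mathbb{E}[\hat{\theta}]\big)^2\Big]}_{\text{variance}} .
\]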
Why is this the case? We do not know the true distribution of the data, so instead of minimizing the expected risk $R(f)$ we design a predictor by minimizing the empirical risk:
\[
\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f), \qquad \text{where } \hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \ell(f(X_i), Y_i) .
\]
If $\mathcal{F}$ is very large then $\hat{R}_n(\hat{f}_n)$ can be made arbitrarily small and the resulting $\hat{f}_n$ can “overfit” to the data, since $\hat{R}_n(f)$ is not a good estimator of the true risk $R(f)$.
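To make the procedure concrete, here is a minimal sketch of empirical risk minimization over a small finite class under squared loss; the data-generating process and the candidate class are illustrative assumptions, not part of the lecture.

```python
import numpy as np

# A minimal sketch of empirical risk minimization (ERM) over a finite
# class of models; the data-generating process and candidate class are
# illustrative assumptions, not taken from the lecture.

rng = np.random.default_rng(0)

# Training data: an assumed noisy sinusoidal relationship.
n = 30
X = rng.uniform(0, 1, n)
Y = np.sin(2 * np.pi * X) + 0.3 * rng.normal(size=n)

def empirical_risk(f, X, Y):
    """Empirical risk R_hat_n(f): average squared loss on the sample."""
    return np.mean((f(X) - Y) ** 2)

# A small finite class F: constant predictors f(x) = c over a grid of levels.
candidates = [lambda x, c=c: np.full_like(x, c) for c in np.linspace(-1.0, 1.0, 21)]

# ERM picks the element of F with the smallest empirical risk.
f_hat = min(candidates, key=lambda f: empirical_risk(f, X, Y))
print("empirical risk of f_hat:", empirical_risk(f_hat, X, Y))
```

Because the class here is tiny (21 constant functions), the empirical risk of $\hat{f}_n$ stays close to its true risk; the trouble described above begins when the class is rich enough to chase the noise in the sample.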
The behavior of the true and empirical risks, as a function of the size (or complexity) of the space $\mathcal{F}$, is illustrated in [link]. Unfortunately, we can't easily determine whether we are overfitting or underfitting just by looking at the empirical risk.
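This behavior can also be reproduced numerically. The sketch below (with an assumed sinusoidal target and noise level) sweeps the complexity of $\mathcal{F}$ using polynomials of increasing degree: the empirical risk decreases monotonically, while a large held-out sample, standing in for the true risk, reveals the overfitting.

```python
import numpy as np

# A sketch of the tradeoff: as the class F grows (polynomials of
# increasing degree), the empirical risk keeps shrinking while the
# true risk, approximated here by a large held-out sample, eventually
# grows. The sinusoidal target and noise level are assumptions.

rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)
    return x, y

x_train, y_train = sample(30)     # used to minimize the empirical risk
x_test, y_test = sample(10_000)   # large sample approximates the true risk

for degree in [1, 3, 5, 9, 15]:
    coeffs = np.polyfit(x_train, y_train, degree)  # least squares over F_degree
    train_risk = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_risk = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: empirical risk {train_risk:.3f}, "
          f"estimated true risk {test_risk:.3f}")
```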
Picking
\[
\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f)
\]
is problematic if $\mathcal{F}$ is large. We will examine two general approaches to dealing with this problem: