Revisit the polynomial regression example (Lecture 2, Ex. 4), and incorporate a penalty term which is proportional to the degree of the polynomial, or to the size of its derivative. In essence, this approach penalizes functions which are too “wiggly”, the intuition being that the true function is probably smooth, so a function which is very wiggly will overfit the data.
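As a concrete illustration, here is a minimal sketch in Python of such a penalized fit. The function name penalized_poly_fit, the penalty lam * degree, and the value of lam are choices made for this example, not part of the original exercise.

import numpy as np

def penalized_poly_fit(x, y, max_degree=10, lam=0.1):
    # Fit polynomials of every degree up to max_degree by least squares and
    # keep the one minimizing: empirical risk + lam * degree.
    # The penalty "lam * degree" is one illustrative way to punish wiggliness.
    best = None
    for d in range(max_degree + 1):
        coeffs = np.polyfit(x, y, d)               # least-squares fit of degree d
        residuals = y - np.polyval(coeffs, x)
        emp_risk = np.mean(residuals ** 2)         # empirical (squared-error) risk
        score = emp_risk + lam * d                 # penalized empirical risk
        if best is None or score < best[0]:
            best = (score, d, coeffs)
    return best[1], best[2]

# Toy data: a smooth truth plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
degree, coeffs = penalized_poly_fit(x, y)
print("selected degree:", degree)

With a squared-error risk and a smooth underlying function, the penalty steers the selection away from high-degree fits that merely track the noise.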
How do we decide how to restrict or penalize the empirical risk minimization process? Approaches which have appeared in the literature include the following.
Perhaps the simplest approach is to try to limit the size of $\mathcal{F}$ in a way that depends on the number of training data $n$. The more data we have, the more complex the space of models we can entertain. Let the class of candidate functions grow with $n$. That is, take $\mathcal{F} = \mathcal{F}_n$, where $\mathcal{F}_n$ grows as $n \to \infty$. In other words, consider a sequence of spaces with increasing complexity or degrees of freedom depending on the number of training data samples, $n$.
Given $n$ samples i.i.d. distributed according to $P_{XY}$, select $\hat{f}_n$ to minimize the empirical risk over $\mathcal{F}_n$:
\[
\hat{f}_n = \arg\min_{f \in \mathcal{F}_n} \hat{R}_n(f).
\]
In the next lecture we will consider an example using the method of sieves. The basic idea is to design the sequence of model spaces $\{\mathcal{F}_n\}$ in such a way that the excess risk decays to zero as $n \to \infty$. This sort of idea has been around for decades, but Grenander's method of sieves is often cited as a nice formalization of the idea: U. Grenander, Abstract Inference, Wiley, New York, 1981.
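As a preview of that example's flavor, the following Python sketch implements one possible sieve for regression; the choice of $\mathcal{F}_n$ as polynomials of degree roughly $n^{1/3}$ is an arbitrary illustration, not the construction treated in the next lecture.

import numpy as np

def sieve_fit(x, y):
    # F_n = polynomials of degree k(n), with k(n) growing slowly in n.
    # The rate k(n) = n^(1/3) is an arbitrary choice made for this illustration.
    n = x.size
    k_n = max(1, int(np.floor(n ** (1.0 / 3.0))))  # complexity of F_n grows with n
    return np.polyfit(x, y, k_n)                   # empirical risk minimizer over F_n

rng = np.random.default_rng(1)
for n in (30, 300, 1000):
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
    coeffs = sieve_fit(x, y)
    print(f"n = {n}: fitting over F_n = polynomials of degree {coeffs.size - 1}")

Note that no explicit penalty term appears; the restriction to $\mathcal{F}_n$ itself does the regularizing, and the only design choice is how fast the class grows with $n$.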
In certain cases, the empirical risk happens to be the negative log-likelihood of the data, and one can then interpret the cost $c(f)$ (the penalty added to the empirical risk, so that one minimizes $\hat{R}_n(f) + c(f)$) as reflecting prior knowledge about which models are more or less likely. In this case, $e^{-c(f)}$ is like a prior probability distribution on the space $\mathcal{F}$. The cost $c(f)$ is large if $f$ is highly improbable, and $c(f)$ is small if $f$ is highly probable.
Alternatively, if we restrict $\mathcal{F}$ to be small, and denote the space of all measurable functions by $\mathcal{M}$, then it is essentially as if we have placed a uniform prior over all functions in $\mathcal{F}$, and zero prior probability on the functions in $\mathcal{M} \setminus \mathcal{F}$.
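To make the connection explicit, suppose (purely for this sketch) that the empirical risk is the negative log-likelihood of the data, $\hat{R}_n(f) = -\log p(\text{data} \mid f)$. Then minimizing the penalized criterion is exactly maximum a posteriori (MAP) estimation under a prior proportional to $e^{-c(f)}$:
\[
\arg\min_{f \in \mathcal{F}} \left\{ \hat{R}_n(f) + c(f) \right\}
= \arg\max_{f \in \mathcal{F}} \left\{ \log p(\text{data} \mid f) - c(f) \right\}
= \arg\max_{f \in \mathcal{F}} \; p(\text{data} \mid f)\, e^{-c(f)}.
\]
The normalizing constant of the prior does not change the maximizer, so $e^{-c(f)}$ only needs to be proportional to a probability distribution.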
Description length methods represent each $f \in \mathcal{F}$ with a string of bits. More complicated functions require more bits to represent. Accordingly, we can then set the cost $c(f)$ proportional to the number of bits needed to describe $f$ (the description length of $f$). This results in what is known as the minimum description length (MDL) approach, where the minimum description length estimator is given by
\[
\hat{f}_n = \arg\min_{f \in \mathcal{F}} \left\{ \hat{R}_n(f) + c(f) \right\}.
\]
In the Bayesian setting, $2^{-c(f)}$ can be interpreted as a prior probability density on $\mathcal{F}$ (the cost being measured in bits), with more complex models being less probable and simpler models being more probable. In that sense, both the Bayesian and MDL approaches have a similar spirit.
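For concreteness, here is a minimal two-part-code sketch in Python. The particular code (a Gaussian code for the residuals plus roughly $\tfrac{1}{2}\log_2 n$ bits per coefficient) is a standard textbook choice adopted only for this illustration, not the coding scheme of these notes.

import numpy as np

def mdl_select(x, y, max_degree=10):
    # Two-part description length for a degree-d polynomial model:
    #   bits to encode the residuals under a Gaussian model ~ (n/2) * log2(RSS / n)
    #   bits to encode the k = d + 1 coefficients           ~ (k/2) * log2(n)
    n = x.size
    best = None
    for d in range(max_degree + 1):
        k = d + 1
        coeffs = np.polyfit(x, y, d)
        rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
        descr_len = 0.5 * n * np.log2(rss / n) + 0.5 * k * np.log2(n)
        if best is None or descr_len < best[0]:
            best = (descr_len, d)
    return best[1]

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
print("MDL-selected degree:", mdl_select(x, y))

More bits spent on the coefficients (a more complicated $f$) must be paid for by a correspondingly better fit to the data.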
The Vapnik-Chervonenkis (VC) dimension measures the complexity of a class $\mathcal{F}$ relative to a random sample of $n$ training data. For example, take $\mathcal{F}$ to be the set of all linear classifiers in 2-dimensional feature space. Clearly, the space of linear classifiers is infinite (there are an infinite number of lines which can be drawn in the plane). However, many of these linear classifiers would assign the same labels to the training data.
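This observation can be checked numerically. The sketch below (an illustration with randomly drawn points and randomly sampled lines, not part of the original notes) counts how many distinct labelings of a small sample are actually realized by linear classifiers.

import numpy as np

# Infinitely many lines, but only finitely many distinct labelings
# ("dichotomies") of a fixed training sample.
rng = np.random.default_rng(3)
n = 4
points = rng.standard_normal((n, 2))           # n training points in 2-D

labelings = set()
for _ in range(100_000):                       # sample random lines w . x + b = 0
    w = rng.standard_normal(2)
    b = rng.standard_normal()
    labelings.add(tuple((points @ w + b > 0).astype(int)))

print(f"distinct labelings found: {len(labelings)} out of 2^{n} = {2 ** n}")
# For 4 points in general position a line can realize at most 14 of the 16
# possible labelings (the VC dimension of linear classifiers in the plane
# is 3); the random search above typically finds all realizable ones.

So while $\mathcal{F}$ is infinite, its effective complexity relative to a sample of $n$ points is finite, and this is what the VC dimension quantifies.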