Thus, $\hat{\varepsilon}(h_i)$ is exactly the mean of the $m$ random variables $Z_j$ that are drawn iid from a Bernoulli distribution with mean $\varepsilon(h_i)$. Hence, we can apply the Hoeffding inequality, and obtain
$$P(|\varepsilon(h_i) - \hat{\varepsilon}(h_i)| > \gamma) \le 2 \exp(-2\gamma^2 m).$$
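As a quick numerical sketch (not part of the original notes), we can check the Hoeffding bound by simulation: draw $m$ Bernoulli samples many times and compare the empirical frequency of a deviation larger than $\gamma$ against $2\exp(-2\gamma^2 m)$. The function name and default parameters here are illustrative choices.

```python
import math
import random

def hoeffding_check(p=0.3, m=100, gamma=0.1, trials=20000, seed=0):
    """Estimate P(|empirical mean - p| > gamma) for the mean of m iid
    Bernoulli(p) draws, and compare with the Hoeffding bound
    2 * exp(-2 * gamma^2 * m)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(m)) / m
        if abs(mean - p) > gamma:
            exceed += 1
    empirical = exceed / trials
    bound = 2 * math.exp(-2 * gamma**2 * m)
    return empirical, bound
```

With the defaults above, the simulated deviation probability comes out well below the bound, as expected: Hoeffding is a worst-case guarantee and is typically loose for any particular distribution.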
This shows that, for our particular $h_i$, training error will be close to generalization error with high probability, assuming $m$ is large. But we don't just want to guarantee that $\hat{\varepsilon}(h_i)$ will be close to $\varepsilon(h_i)$ (with high probability) for just one particular $h_i$. We want to prove that this will be true simultaneously for all $h \in \mathcal{H}$. To do so, let $A_i$ denote the event that $|\varepsilon(h_i) - \hat{\varepsilon}(h_i)| > \gamma$. We've already shown that, for any particular $A_i$, it holds true that $P(A_i) \le 2 \exp(-2\gamma^2 m)$. Thus, using the union bound, we have that
$$\begin{aligned}
P(\exists\, h \in \mathcal{H}.\; |\varepsilon(h_i) - \hat{\varepsilon}(h_i)| > \gamma)
&= P(A_1 \cup \cdots \cup A_k) \\
&\le \sum_{i=1}^{k} P(A_i) \\
&\le \sum_{i=1}^{k} 2 \exp(-2\gamma^2 m) \\
&= 2k \exp(-2\gamma^2 m).
\end{aligned}$$
If we subtract both sides from 1, we find that
$$\begin{aligned}
P(\neg\,\exists\, h \in \mathcal{H}.\; |\varepsilon(h_i) - \hat{\varepsilon}(h_i)| > \gamma)
&= P(\forall\, h \in \mathcal{H}.\; |\varepsilon(h_i) - \hat{\varepsilon}(h_i)| \le \gamma) \\
&\ge 1 - 2k \exp(-2\gamma^2 m).
\end{aligned}$$
(The “$\neg$” symbol means “not.”) So, with probability at least $1 - 2k \exp(-2\gamma^2 m)$, we have that $\varepsilon(h)$ will be within $\gamma$ of $\hat{\varepsilon}(h)$ for all $h \in \mathcal{H}$. This is called a uniform convergence result, because this is a bound that holds simultaneously for all (as opposed to just one) $h \in \mathcal{H}$.
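To get a feel for the numbers, the lower bound $1 - 2k\exp(-2\gamma^2 m)$ can be evaluated directly; this small helper (an illustration, not from the original notes) does exactly that.

```python
import math

def uniform_convergence_bound(k, m, gamma):
    """Lower bound 1 - 2k * exp(-2 * gamma^2 * m) on the probability that
    all k hypotheses satisfy |eps(h) - eps_hat(h)| <= gamma.  For small m
    the expression can be negative, in which case the bound is vacuous."""
    return 1 - 2 * k * math.exp(-2 * gamma**2 * m)
```

For example, with $k = 10$, $\gamma = 0.05$, and $m = 10{,}000$, the bound is essentially 1, while the same $k$ and $\gamma$ with $m = 100$ give a vacuous (negative) bound.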
In the discussion above, what we did was, for particular values of $m$ and $\gamma$, give a bound on the probability that for some $h \in \mathcal{H}$, $|\varepsilon(h) - \hat{\varepsilon}(h)| > \gamma$. There are three quantities of interest here: $m$, $\gamma$, and the probability of error; we can bound any one of them in terms of the other two.
For instance, we can ask the following question: Given $\gamma$ and some $\delta > 0$, how large must $m$ be before we can guarantee that with probability at least $1 - \delta$, training error will be within $\gamma$ of generalization error? By setting $\delta = 2k \exp(-2\gamma^2 m)$ and solving for $m$ [you should convince yourself this is the right thing to do!], we find that if
$$m \ge \frac{1}{2\gamma^2} \log \frac{2k}{\delta},$$
then with probability at least $1 - \delta$, we have that $|\varepsilon(h) - \hat{\varepsilon}(h)| \le \gamma$ for all $h \in \mathcal{H}$. (Equivalently, this shows that the probability that $|\varepsilon(h) - \hat{\varepsilon}(h)| > \gamma$ for some $h \in \mathcal{H}$ is at most $\delta$.) This bound tells us how many training examples we need in order to make a guarantee. The training set size $m$ that a certain method or algorithm requires in order to achieve a certain level of performance is also called the algorithm's sample complexity.
The key property of the bound above is that the number of training examples needed to make this guarantee is only logarithmic in $k$, the number of hypotheses in $\mathcal{H}$. This will be important later.
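The sample-complexity formula is easy to evaluate directly, and doing so makes the logarithmic dependence on $k$ concrete. The sketch below (helper name is my own) computes the smallest integer $m$ satisfying the bound.

```python
import math

def sample_complexity(k, gamma, delta):
    """Smallest integer m with m >= (1 / (2 * gamma^2)) * log(2k / delta):
    the training-set size the uniform-convergence bound asks for."""
    return math.ceil(math.log(2 * k / delta) / (2 * gamma**2))
```

For instance, with $\gamma = 0.05$ and $\delta = 0.01$, going from $k = 10^4$ to $k = 10^6$ hypotheses (a 100-fold increase) raises the required $m$ from 2902 to only 3823, illustrating the logarithmic growth in $k$.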
Similarly, we can also hold $m$ and $\delta$ fixed and solve for $\gamma$ in the previous equation, and show [again, convince yourself that this is right!] that with probability $1 - \delta$, we have that for all $h \in \mathcal{H}$,
$$|\hat{\varepsilon}(h) - \varepsilon(h)| \le \sqrt{\frac{1}{2m} \log \frac{2k}{\delta}}.$$
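This third view of the same equation can also be sketched in code (again, an illustrative helper, not from the notes): fixing $m$, $k$, and $\delta$ yields the error margin $\gamma$, which shrinks like $1/\sqrt{m}$.

```python
import math

def error_bound(m, k, delta):
    """gamma = sqrt((1 / (2m)) * log(2k / delta)): with probability at
    least 1 - delta, |eps_hat(h) - eps(h)| <= gamma for every h in H."""
    return math.sqrt(math.log(2 * k / delta) / (2 * m))
```

Note the $1/\sqrt{m}$ rate: quadrupling the training set size exactly halves the guaranteed margin $\gamma$, and plugging in the $m$ from the sample-complexity formula recovers the original $\gamma$.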
Now, let's assume that uniform convergence holds, i.e., that $|\varepsilon(h) - \hat{\varepsilon}(h)| \le \gamma$ for all $h \in \mathcal{H}$. What can we prove about the generalization of our learning algorithm that picked $\hat{h} = \arg\min_{h \in \mathcal{H}} \hat{\varepsilon}(h)$?
Define $h^* = \arg\min_{h \in \mathcal{H}} \varepsilon(h)$ to be the best possible hypothesis in $\mathcal{H}$. Note that $h^*$ is the best that we could possibly do given that we are using $\mathcal{H}$, so it makes sense to compare our performance to that of $h^*$. We have:
$$\begin{aligned}
\varepsilon(\hat{h}) &\le \hat{\varepsilon}(\hat{h}) + \gamma \\
&\le \hat{\varepsilon}(h^*) + \gamma \\
&\le \varepsilon(h^*) + 2\gamma.
\end{aligned}$$
The first line used the fact that $|\varepsilon(\hat{h}) - \hat{\varepsilon}(\hat{h})| \le \gamma$ (by our uniform convergence assumption). The second used the fact that $\hat{h}$ was chosen to minimize $\hat{\varepsilon}(h)$, and hence $\hat{\varepsilon}(\hat{h}) \le \hat{\varepsilon}(h)$ for all $h$, and in particular $\hat{\varepsilon}(\hat{h}) \le \hat{\varepsilon}(h^*)$. The third line used the uniform convergence assumption again, to show that $\hat{\varepsilon}(h^*) \le \varepsilon(h^*) + \gamma$. So, what we've shown is the following: If uniform convergence occurs, then the generalization error of $\hat{h}$ is at most $2\gamma$ worse than that of the best possible hypothesis in $\mathcal{H}$!
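The three-line chain above can be watched happening in a small simulation (an illustration under assumed numbers, not part of the notes): give each of $k$ hypotheses a true error, draw its training error empirically, take $\gamma$ to be the largest realized deviation (so uniform convergence holds with that $\gamma$ by construction), and verify that the empirical-risk minimizer is within $2\gamma$ of the best hypothesis.

```python
import random

def erm_gap_demo(k=50, m=200, seed=1):
    """Simulate k hypotheses: h_i has true error eps[i]; its training error
    eps_hat[i] is the mean of m Bernoulli(eps[i]) draws.  With gamma set to
    the realized max deviation, the ERM pick h_hat must satisfy
    eps(h_hat) <= eps(h_star) + 2 * gamma, per the three-line derivation."""
    rng = random.Random(seed)
    eps = [rng.uniform(0.1, 0.5) for _ in range(k)]               # true errors
    eps_hat = [sum(rng.random() < e for _ in range(m)) / m for e in eps]
    gamma = max(abs(a - b) for a, b in zip(eps, eps_hat))         # realized margin
    h_hat = min(range(k), key=lambda i: eps_hat[i])               # ERM choice
    h_star = min(range(k), key=lambda i: eps[i])                  # best in H
    return eps[h_hat], eps[h_star], gamma
```

Because $\gamma$ here is defined as the realized maximum deviation, the $2\gamma$ guarantee is certain to hold in every run; it is the derivation itself, restated numerically.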