Okay. So, welcome back. Before I get into this lecture’s technical material, I’ll just say that this week’s discussion section will again be the TAs talking about convex optimization. In last week’s discussion section they covered the first part of convex optimization, and this week they’ll wrap up the material they have to present on it.
So what I want to do today in this lecture is talk a little bit more about learning theory. In particular, I’ll talk about VC dimension, building on the issues of the bias-variance tradeoff and of underfitting and overfitting that we saw in the previous lecture and will see again in this one. I then want to talk about model selection algorithms for automatically making decisions about this bias-variance tradeoff that we started to talk about in the previous lecture. And depending on how much time there is, I actually may not get to Bayesian [inaudible]. But if I don’t get to this today, I’ll get to it in next week’s lecture.
To recap: the result we proved in the previous lecture was that if you have a finite hypothesis class – if script H is a set of k hypotheses – and you fix some parameters, gamma and delta, then in order to guarantee that this bound holds with probability at least one minus delta, it suffices that the training set size m is greater than or equal to one over two gamma squared, times log of two k over delta; okay? And using big-O notation, just dropping the constants, I can also write this as m being on the order of one over gamma squared, times log of k over delta; okay? So just to quickly remind you what all of the notation means: we talked about empirical risk minimization, which was the simplified model of machine learning where you have a hypothesis class script H.
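[Editor's note: the following is not from the lecture, just a minimal Python sketch of the sample complexity formula above, with illustrative values of gamma, delta, and k chosen purely as assumptions.]

```python
import math

# Sample complexity for a finite hypothesis class:
#   m >= (1 / (2 * gamma^2)) * log(2k / delta)
gamma = 0.05   # assumed error tolerance gamma
delta = 0.01   # assumed failure probability; the guarantee holds with probability >= 1 - delta
k = 10_000     # assumed size of the finite hypothesis class

m = (1.0 / (2.0 * gamma ** 2)) * math.log(2.0 * k / delta)
print(f"need m >= {math.ceil(m)} training examples")
# Note that m grows only logarithmically in k, but quadratically in 1/gamma.
```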
And what the empirical risk minimization learning algorithm does is just choose the hypothesis that attains the smallest error on the training set. And this symbol, epsilon, just denoted generalization error; right? This is the probability of a hypothesis h misclassifying a new example drawn from the same distribution as the training set. And so this says that in order to guarantee that the generalization error of the hypothesis h hat output by empirical risk minimization is less than or equal to the best possible generalization error achievable in your hypothesis class, plus two times gamma – two times this error threshold – and that this holds with probability at least one minus delta, we showed that it suffices for your training set size m to be greater than or equal to one over two gamma squared, times log of two k over delta; okay? Where again, k is the size of your hypothesis class.
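[Editor's note: again not from the lecture, just a small sketch of empirical risk minimization over a finite hypothesis class; the toy data and the threshold classifiers used as the hypothesis class are made up for illustration.]

```python
import numpy as np

def erm(hypotheses, X, y):
    """Return the hypothesis with the smallest empirical (training) error,
    along with that error."""
    best_h, best_err = None, float("inf")
    for h in hypotheses:
        err = np.mean(h(X) != y)   # fraction of training examples misclassified
        if err < best_err:
            best_h, best_err = h, err
    return best_h, best_err

# Toy 1-D data; the true labels come from the threshold 0.3.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = (X > 0.3).astype(int)

# A small finite hypothesis class: 21 threshold classifiers.
hypotheses = [lambda x, t=t: (x > t).astype(int) for t in np.linspace(-1, 1, 21)]

h_hat, train_err = erm(hypotheses, X, y)
print("training error of h_hat:", train_err)
```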
And so this is a sample complexity result, because it gives us a bound on the number of training examples we need in order to give a guarantee on the generalization error; okay? So what I want to do now is take this result and try to generalize it to the case of infinite hypothesis classes. So far, we’ve said that the set script H is just k specific functions, but what if you want to use a model like logistic regression, which is actually parameterized by real numbers? So I’m actually first going to give an argument that’s sort of formally broken – just technically somewhat broken – but that conveys useful intuition. And then I’ll describe the more correct argument, but without proving it, since the full proof is somewhat involved.