So once you solve for alpha and w, it's really easy to solve for b. You can plug alpha and w back into the primal optimization problem and solve for b.
And I just wrote it down for the sake of completeness. The intuition behind this formula is just to find the worst positive example and the worst negative example – say this one and this one – and split the difference between them. And that tells you where you should set the threshold, that is, where to place the separating hyperplane.
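(As a rough illustration of that formula in code – a sketch, not something from the lecture: it assumes you already have the optimal w as a numpy array, training inputs X and labels y in {-1, +1}, and the function name is just made up for the example.)

    import numpy as np

    def intercept(w, X, y):
        # b for the optimal margin classifier, given the optimal w:
        # take the negative example with the largest score and the positive
        # example with the smallest score, and split the difference.
        scores = X @ w                         # w transpose x(i) for every example
        worst_neg = scores[y == -1].max()      # negative example closest to the boundary
        worst_pos = scores[y == 1].min()       # positive example closest to the boundary
        return -(worst_neg + worst_pos) / 2.0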
And then this is the optimal margin classifier. Together with one more idea that we'll add to it, called kernels, this is also called a support vector machine. And I'll say a few words about that.
But I hope the process is clear. It's a dual problem. We're going to solve the dual problem for the alpha i's. That gives us w, and that gives us b.
So there's just one more thing I wanna point out as I lead into the next lecture, which is that – I'll just write this out again, I guess – which is that it turns out we can take the entire algorithm, and we can express the entire algorithm in terms of inner products. And here's what I mean by that.
So we saw that the parameter w is this sum over your training examples, w equals the sum over i of alpha i yi xi. And say we need to make a prediction. Someone gives you a new value of x, and you want the value of the hypothesis on that x. That's given by g of w transpose x plus b, where g was this threshold function that outputs minus 1 or plus 1. And so you need to compute w transpose x plus b, and that's equal to the sum over i of alpha i yi xi transpose x, plus b.
And so that can be expressed as a sum of these inner products between your training examples xi and this new value of x.
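(A minimal sketch of that prediction written purely in terms of inner products – again assuming numpy arrays alpha, y, X from training and the scalar b; the function name is just illustrative.)

    import numpy as np

    def predict(alpha, y, X, b, x_new):
        # w transpose x_new + b  =  sum over i of alpha_i * y_i * <x_i, x_new>  +  b
        inner_products = X @ x_new             # <x(i), x_new> for every training example
        score = np.sum(alpha * y * inner_products) + b
        return 1 if score >= 0 else -1         # g: threshold to plus or minus 1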
And this will lead into our next lecture, which is the idea of kernels. And it turns out that in the sorts of feature spaces we'd like to use with support vector machines, your training examples may be very high-dimensional. It may even be the case that the features that you want to use are infinite-dimensional feature vectors.
But despite this, it'll turn out that there'll be an interesting representation that you can use that will allow you to compute inner products like these efficiently. And this holds true only for certain feature spaces. It doesn't hold true for arbitrary sets of features.
But when we talk about the idea of kernels in the next lecture, we'll see examples where even though you have extremely high-dimensional feature vectors, you may never want to represent xi explicitly as a high-dimensional – or infinite-dimensional – feature vector; you may not even be able to store it in computer memory. But you will nonetheless be able to compute inner products between these feature vectors very efficiently. And so, for example, you can make predictions by making use of these inner products.
This is just xi transpose x. You can compute these inner products very efficiently and, therefore, make predictions. And the other reason we derived the dual was because, on this board, when we worked out what W of alpha is, W of alpha has the same property – W of alpha is again written in terms of these inner products.
And so if you actually look at the dual optimization problem and all the steps of the algorithm, you'll find that you can do everything you want – solve the optimization problem for the parameters alpha – without ever needing to represent xi directly. All you need is to be able to compute inner products between your feature vectors like these.
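(For reference, the dual objective being pointed at here – in the standard form from the derivation earlier in the lecture – depends on the training inputs only through pairwise inner products:

    W(alpha) = sum over i of alpha_i  -  (1/2) sum over i, j of y(i) y(j) alpha_i alpha_j <x(i), x(j)>,

maximized subject to alpha_i >= 0 and the sum over i of alpha_i y(i) equal to 0. So both training and prediction touch the xi's only through inner products.)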
Well, one last property of this algorithm that's kinda nice is that, as I said previously, the alpha i's are non-zero only for the support vectors – only for the vectors whose functional margin is equal to 1.
And in practice, there are usually fairly few of them. So what this means is that if you're representing w this way, then w is represented in terms of only a fairly small fraction of the training examples, because most of the alpha i's are 0. And so when you're summing up the sum, you need to compute inner products only with the support vectors, which is usually a small fraction of your training set. So that's another nice property, because most of the alpha i's are 0.
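(Continuing the same illustrative numpy sketch: once training is done you can keep just the support vectors – the examples with non-zero alpha – and sum over only those at prediction time.)

    import numpy as np

    def keep_support_vectors(alpha, y, X, tol=1e-8):
        # Most alpha_i are exactly 0; prediction only needs the rest.
        sv = alpha > tol
        return alpha[sv], y[sv], X[sv]

    # Then predict(alpha_sv, y_sv, X_sv, b, x_new) sums over only the support vectors.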
And well, much of this will make much more sense when we talk about kernels.
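(As a small preview of that – a standard example, not one worked out in this lecture: the kernel K(x, z) = (x transpose z) squared equals the inner product of two explicit quadratic feature vectors of dimension n squared, but it can be computed in time linear in n, without ever writing those feature vectors down.)

    import numpy as np

    def phi_quadratic(x):
        # Explicit quadratic feature map: all products x_i * x_j (dimension n^2)
        return np.outer(x, x).ravel()

    def kernel_quadratic(x, z):
        # The same inner product, computed in O(n) time without forming phi
        return float(x @ z) ** 2

    x = np.array([1.0, 2.0, 3.0])
    z = np.array([0.5, -1.0, 2.0])
    assert np.isclose(phi_quadratic(x) @ phi_quadratic(z), kernel_quadratic(x, z))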
Any quick questions before I close? Yeah.
Student: It seems that for any of this to work, the data has to be really well behaved, and if any of the points are kinda on the wrong side –
Instructor (Andrew Ng): Oh, yeah – so, again, today's lecture assumes that the data is linearly separable, that you can actually get perfect classification. I'll fix this in the next lecture as well. But that's an excellent point; it is an assumption.
Yes?
Student: So can't we assume that [inaudible] point [inaudible], so [inaudible] have [inaudible]?
Instructor (Andrew Ng): Yes – let me just say that there are ways to generalize this to multiple classes that I probably won't get to, but yeah, that generalization exists.
Okay. Let's close for today, then. We'll talk about kernels in our next lecture.
[End of Audio]
Duration: 77 minutes