So all the points closer to the blue cross are painted blue, and so on. The other step is updating the cluster centroids: I look at all the points that I've painted blue and compute the average of all the blue dots, I look at all the red dots and compute the average of all the red dots, and then I move each cluster centroid to the average of the points assigned to it, as follows. So that is now one iteration of k-means, and now I'll repeat the same process. I look at all the points and assign all the points closer to the blue cross the color blue, and similarly red. So now I have a new assignment of points to the cluster centroids, and finally, I again compute the average of all the blue points and the average of all the red points and update the cluster centroids again, as follows; and now k-means has actually converged. If you keep running these two steps of k-means over and over, the cluster centroids and the assignment of the points to the closest cluster centroids will remain the same. Yeah.
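A minimal sketch of these two alternating steps in NumPy (the function and variable names here are illustrative, not from the lecture):

    import numpy as np

    def kmeans(X, k, n_iters=100, seed=0):
        # Plain k-means on an (m, n) array of points X with k clusters.
        rng = np.random.default_rng(seed)
        # Initialize the centroids to k randomly chosen data points.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iters):
            # Step 2.1: assign each point to its closest centroid
            # ("paint" each point the color of the nearest cross).
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Step 2.2: move each centroid to the mean of the points
            # assigned to it (keep it in place if its cluster is empty).
            new_centroids = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            # Converged: centroids (and hence assignments) stop changing.
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels, centroids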
Student: [Inaudible]
Instructor (Andrew Ng): Yeah, I'll answer that in a second. Yeah. Okay. So [inaudible]. Take a second to look at this again and make sure you understand how the algorithm I wrote out maps onto the animation that we just saw. Do you have a question?
Student: [Inaudible]
Instructor (Andrew Ng): I see. Okay. Let me answer that in a second. Okay. So these are the two steps. Step 2.1 was assigning the points to the closest centroid, and step 2.2 was shifting the cluster centroid to be the mean of all the points assigned to it. Okay. Back to the questions we just had: one is, does the algorithm converge? The answer is yes, k-means is guaranteed to converge in a certain sense. In particular, define the distortion function J(c, μ) = Σ_i ||x^(i) − μ_{c^(i)}||^2, a function of the cluster assignments c and the cluster centroids μ that sums the squared distances between each point and the cluster centroid it's assigned to. Then you can show (I won't really prove this here) that k-means is coordinate descent on the function J. Who remembers coordinate ascent? We saw it as an optimization algorithm maybe about two weeks ago; coordinate descent is the algorithm that repeatedly minimizes with respect to one set of variables while holding the others fixed. And so what you can prove is that the two steps of k-means are exactly optimizing this function with respect to c and with respect to μ, alternately. Therefore the function J(c, μ) must decrease monotonically on every iteration, and the sense in which k-means converges is that J(c, μ) can only go down, and so it will eventually stop going down.
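A sketch of the distortion function and the monotone-decrease check (again illustrative code assuming NumPy, not the lecture's own):

    import numpy as np

    def distortion(X, labels, centroids):
        # J(c, mu) = sum_i ||x^(i) - mu_{c^(i)}||^2: total squared distance
        # from each point to the centroid it is currently assigned to.
        return float(((X - centroids[labels]) ** 2).sum())

    # Each half-step of k-means can only lower J: the assignment step
    # minimizes J over c with mu held fixed, and the update step minimizes
    # J over mu with c held fixed. So J is monotonically non-increasing.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    centroids = X[rng.choice(200, size=3, replace=False)]
    prev_J = np.inf
    for _ in range(20):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(3)
        ])
        J = distortion(X, labels, centroids)
        assert J <= prev_J + 1e-9  # the convergence guarantee
        prev_J = J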
Okay. It's actually possible that there are several clusterings that give the same value of J(c, μ), and so k-means may, in principle, switch back and forth between different clusterings in the extremely unlikely case that multiple clusterings give exactly the same value for this objective function; in practice that essentially never happens, and even if it does, the function J(c, μ) still converges. Another question was how you choose the number of clusters. It turns out that the vast majority of the time when people apply k-means, you still just pick a number of clusters somewhat arbitrarily, or you try a few different numbers of clusters and pick the one that seems to work best. The number of clusters is just one parameter of this algorithm, and usually I think it's not very hard to choose. There are some automatic ways of choosing the number of clusters, but I'm not gonna talk about them. When I do this, I usually just pick the number of clusters somewhat arbitrarily. And the reason is, I think for many clustering problems the "true" number of clusters is actually ambiguous: for example, if you have a data set that looks like this, some of you may see four clusters and some of you may see two clusters, and so the right answer for the actual number of clusters is ambiguous. Yeah.