Student: So this z^(i) is just a label, like an X or an O?
Instructor (Andrew Ng): Yes, basically. Any other questions? Okay. So if you knew the values of z, with z playing a role similar to the class labels in Gaussian Discriminant Analysis, then you could use maximum likelihood estimation to fit the parameters. But in reality, we don't actually know the values of the z's; all we're given is this unlabeled data set. So let me write down a specific bootstrap procedure, in which the idea is that we're going to use our model to try to guess the values of z. We don't know z, but we'll just take a guess at its values, then use the guessed values of z to fit the parameters of the rest of the model, and then iterate. Now that we have a better estimate of the parameters of the rest of the model, we'll take another guess at the values of z, and then again use something like maximum likelihood estimation to fit the parameters of the model. So the algorithm I'm going to write down is called the EM algorithm, and it proceeds as follows. Repeat until convergence. In the E-step, we're going to guess the values of the unknown z^(i)'s, and in particular I'm going to set

    w_j^(i) := P(z^(i) = j | x^(i); φ, μ, Σ).

So I'm going to use the rest of the parameters in my model to compute the probability that point x^(i) came from Gaussian number j. And just to be completely concrete about what I mean by this, by Bayes' rule that is

    w_j^(i) = p(x^(i) | z^(i) = j; μ, Σ) P(z^(i) = j; φ) / Σ_{l=1}^{k} p(x^(i) | z^(i) = l; μ, Σ) P(z^(i) = l; φ),

where p(x^(i) | z^(i) = j) is, you know, the Gaussian density:

    p(x^(i) | z^(i) = j; μ, Σ) = 1 / ((2π)^{n/2} |Σ_j|^{1/2}) · exp( -(1/2) (x^(i) - μ_j)^T Σ_j^{-1} (x^(i) - μ_j) ).

So the numerator is this Gaussian density times the prior, and the denominator is the sum from l = 1 through k of essentially the same terms, with j replaced by l. Okay. Then there's the maximization step, where you update your estimates of the parameters. I'll just lay down the formulas here:

    φ_j = (1/m) Σ_{i=1}^{m} w_j^(i)
    μ_j = Σ_{i=1}^{m} w_j^(i) x^(i) / Σ_{i=1}^{m} w_j^(i)
    Σ_j = Σ_{i=1}^{m} w_j^(i) (x^(i) - μ_j)(x^(i) - μ_j)^T / Σ_{i=1}^{m} w_j^(i)

When you see these, you should compare them to the formulas we had for maximum likelihood estimation. The two formulas on top are very similar to what you saw for Gaussian Discriminant Analysis, except that now we have these soft weights: w_j^(i), remember, is the probability we computed that point i came from Gaussian j (I don't want to call it cluster j), rather than an indicator for whether point i came from Gaussian j. Okay. And one slight difference between this and the formulas we had for Gaussian Discriminant Analysis is that in the mixture of Gaussians, we more commonly use a different covariance matrix Σ_j for each Gaussian.
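To make the two steps concrete, here is a minimal NumPy sketch of EM for a mixture of Gaussians as just described. This is not code from the lecture; the function name em_mixture_of_gaussians, the random initialization scheme, and the fixed iteration count are all illustrative assumptions.

```python
import numpy as np

def em_mixture_of_gaussians(X, k, n_iters=100):
    """Minimal EM for a mixture of k Gaussians on data X of shape (m, n)."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    phi = np.full(k, 1.0 / k)                  # mixing proportions P(z = j)
    mu = X[rng.choice(m, k, replace=False)]    # initialize means at random data points
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(n) for _ in range(k)])

    for _ in range(n_iters):
        # E-step: w[i, j] = P(z^(i) = j | x^(i); phi, mu, Sigma) via Bayes' rule.
        w = np.zeros((m, k))
        for j in range(k):
            diff = X - mu[j]
            inv = np.linalg.inv(Sigma[j])
            norm = 1.0 / np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma[j]))
            density = norm * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1))
            w[:, j] = phi[j] * density         # prior times Gaussian density
        w /= w.sum(axis=1, keepdims=True)      # denominator: sum over l = 1..k

        # M-step: maximum-likelihood updates with w as "soft" labels.
        for j in range(k):
            wj = w[:, j]
            phi[j] = wj.mean()                 # (1/m) sum_i w_j^(i)
            mu[j] = wj @ X / wj.sum()          # weighted mean of the data
            diff = X - mu[j]
            Sigma[j] = (wj[:, None] * diff).T @ diff / wj.sum()  # weighted covariance

    return phi, mu, Sigma, w
```

Calling phi, mu, Sigma, w = em_mixture_of_gaussians(X, k=2) returns the fitted mixing proportions, means, and covariances, plus the final soft assignments w[i, j]; note that each M-step update is exactly the GDA-style maximum likelihood formula with the indicator replaced by the soft weight.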
So in Gaussian Discriminant Analysis, sort of by convention, you usually model all of the classes with the same covariance matrix Σ. I just wrote down a lot of equations. Why don't you take a second to look at this and make sure it all makes sense? Do you have questions about this? Raise your hand if this makes sense to you? [Inaudible]. Okay, only some of you. Let's see. So let me try to explain that a little bit more. Some of you recall that in Gaussian Discriminant Analysis, if we knew the values of the z^(i)'s, so let's see: suppose I were to give you a labeled data set, suppose I were to tell you the value of z^(i) for each example, then I'd be giving you a data set that looks like this. Okay, so here's my 1-D data set; that's sort of a typical 1-D Gaussian Discriminant Analysis setup. For Gaussian Discriminant Analysis we figured out the maximum likelihood estimates for the parameters, and one of those estimates was for φ_j, the probability that z^(i) = j. You would estimate that as

    φ_j = (1/m) Σ_{i=1}^{m} 1{z^(i) = j}.

That's what we had when we were deriving GDA. If you knew the class label for every example, then this was your maximum likelihood estimate for the chance that a label came from the positive class versus the negative class. It's just the fraction of examples.
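For contrast with the soft EM update above, here is a hedged sketch of the fully supervised estimate just described, where the labels z^(i) are observed; phi_from_labels is an illustrative name, not from the lecture.

```python
import numpy as np

def phi_from_labels(z, k):
    """MLE of phi_j = P(z = j) from observed labels z^(1), ..., z^(m):
    simply the fraction of examples carrying label j."""
    m = len(z)
    return np.array([np.sum(z == j) / m for j in range(k)])

# Example: 2 of 5 examples labeled 0, 3 labeled 1.
z = np.array([0, 1, 1, 0, 1])
print(phi_from_labels(z, k=2))  # [0.4 0.6]
```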