And in particular, Q_i of z_i is going to be Gaussian, with mean given by mu of z_i given x_i and covariance Sigma of z_i given x_i. And so it's true that mu and Sigma may themselves have depended on the values of the parameters I had in the previous iteration of EM, but the way to think about Q is: I'm going to take the parameters from the previous iteration of the algorithm and use them to compute what Q_i of z_i is. And that's the E-step of the EM algorithm. And then once I've computed what Q_i of z_i is, this is a fixed distribution. I'm gonna use these fixed values for mu and Sigma, and just keep these two values fixed as I run the M-step.
Student: So that's – I guess I was confused because in the second point over there, there's a lot of – it looks like they're parameters, but I guess they're old iterations of the parameters.
Instructor (Andrew Ng): Oh, yeah. Yes, you're right. When I wrote down Q_i of z_i, that was a function of – so yeah – the parameters from the previous iteration, and I want to compute the new set of parameters. Okay. More questions? So this is probably the most math I'll ever do in a lecture in this entire course. Let's now talk about a different algorithm. Actually, which board was I on? So what I want to do now is talk about an algorithm called principal components analysis, which is often abbreviated PCA. Here's the idea. PCA has a very similar idea as factor analysis, but it sort of maybe gets to the problem a little more directly than factor analysis does.
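[Editor's note: to make the E-step concrete, here is a minimal NumPy sketch, assuming the factor analysis model from earlier in the lecture, x = mu + Lambda z + epsilon with z ~ N(0, I) and epsilon ~ N(0, Psi). The function name and array shapes are illustrative, not the lecture's notation; the key point is that mu and Lambda and Psi are the parameters from the previous EM iteration, held fixed while Q_i is computed.]

```python
import numpy as np

def e_step(X, mu, Lambda, Psi):
    """Compute Q_i(z_i) = N(mu_{z|x}, Sigma_{z|x}) for every example,
    using the factor-analysis parameters from the previous EM iteration.

    X:      (m, n) data matrix, one example per row
    mu:     (n,)   mean of x
    Lambda: (n, k) factor loading matrix
    Psi:    (n, n) diagonal noise covariance
    """
    n, k = Lambda.shape
    # Marginal covariance of x under the model: Lambda Lambda^T + Psi
    S = Lambda @ Lambda.T + Psi
    S_inv = np.linalg.inv(S)
    # Posterior means, one per example: mu_{z|x} = Lambda^T S^{-1} (x - mu)
    mu_z_given_x = (X - mu) @ S_inv @ Lambda          # shape (m, k)
    # Posterior covariance (shared by all examples):
    # Sigma_{z|x} = I - Lambda^T S^{-1} Lambda
    Sigma_z_given_x = np.eye(k) - Lambda.T @ S_inv @ Lambda
    return mu_z_given_x, Sigma_z_given_x
```

These two quantities are then treated as fixed when maximizing the lower bound in the M-step.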
So the question is: given – so we're still doing unsupervised learning – given a training set of M examples where each x_i is an N-dimensional vector as usual, what I'd like to do is reduce it to a lower, K-dimensional data set, where K is strictly less than N and quite often much smaller than N. So I'll give a couple of examples of why we'd want to do this. Imagine that you're given a data set that contains measurements – of, I don't know, people or something – and unknown to you, whoever collected this data actually included the height of each person in centimeters as well as the height of the person in inches. Because of rounding off to the nearest centimeter or to the nearest inch, the values won't exactly match up, but along those two dimensions of the data, it'll lie extremely close to a straight line – not exactly on a straight line, because of the rounding, but very close to it.
And so we have a data set like this. It seems that what you really care about is that axis; this axis is really the variable of interest. That's maybe the closest thing you have to the true height of a person, and this other axis is just noise. So if you can reduce the dimension of this data from two-dimensional to one-dimensional, then maybe you can get rid of some of the noise in the data. Quite often, you won't know that this was centimeters and this was inches. You may have a data set with a hundred attributes, and you just didn't happen to notice that one was centimeters and one was inches. Another example I sometimes think about: some of you know that my students and I work with [inaudible] helicopters a lot. So imagine that you take surveys or quizzes – measurements – of radio control helicopter pilots. On one axis you have measurements of your pilot's skill, how skillful your helicopter pilot is, and on another axis maybe you measure how much they actually enjoy flying. And maybe – this is really – maybe this is actually roughly one-dimensional data, and there's some variable of interest, which I'll call maybe pilot attitude, that somehow determines their skill and how much they enjoy flying. And so again, if you can reduce this data from two dimensions to one dimension, maybe you'd have a slightly better measure of what I'm loosely calling pilot attitude, which may be what you really wanted to [inaudible]. So let's talk about an algorithm to do this, and I should come back and talk about more applications of PCA later.
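[Editor's note: the algorithm the lecture is building toward can be sketched roughly as follows, assuming the standard PCA recipe of centering the data, forming the empirical covariance matrix, and projecting onto its top-K eigenvectors. This is an illustrative sketch, not the lecture's derivation, which comes later.]

```python
import numpy as np

def pca(X, k):
    """Project N-dimensional data down to K dimensions.

    X: (m, n) data matrix, one example per row
    k: target dimension, with k < n
    """
    # Center the data (per-coordinate variance normalization can also be applied)
    X_centered = X - X.mean(axis=0)
    # Empirical covariance matrix, shape (n, n)
    Sigma = (X_centered.T @ X_centered) / X.shape[0]
    # Eigendecomposition of the symmetric covariance; eigenvalues ascend
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    # Keep the k eigenvectors with the largest eigenvalues (principal components)
    U = eigvecs[:, -k:][:, ::-1]
    # Lower-dimensional representation, shape (m, k)
    return X_centered @ U
```

In the height example above, the single principal component would roughly recover the "true height" direction, with the remaining direction discarded as rounding noise.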