This first inequality comes from the fact that
$$\ell(\theta) \geq \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}$$
holds for any values of $Q_i$ and $\theta$, and in particular holds for $Q_i = Q_i^{(t)}$, $\theta = \theta^{(t+1)}$. To get the second inequality, we used the fact that $\theta^{(t+1)}$ is chosen explicitly to be
$$\arg\max_{\theta} \sum_i \sum_{z^{(i)}} Q_i^{(t)}(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i^{(t)}(z^{(i)})},$$
and thus this formula evaluated at $\theta^{(t+1)}$ must be equal to or larger than the same formula evaluated at $\theta^{(t)}$. Finally, the step used to get the last equality was shown earlier, and follows from $Q_i^{(t)}$ having been chosen to make Jensen's inequality hold with equality at $\theta^{(t)}$.
Hence, EM causes the likelihood to converge monotonically. In our description of the EM algorithm, we said we'd run it until convergence. Given the result that we just showed, one reasonable convergence test would be to check if the increase in $\ell(\theta)$ between successive iterations is smaller than some tolerance parameter, and to declare convergence if EM is improving $\ell(\theta)$ too slowly.
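To make this convergence test concrete, here is a minimal Python sketch of the outer EM loop. The function names `e_step`, `m_step`, and `log_likelihood` are illustrative placeholders for whatever model is being fit, not names defined in these notes:

```python
def run_em(x, theta, log_likelihood, e_step, m_step, tol=1e-4, max_iters=1000):
    """Generic EM loop: iterate until the gain in log-likelihood falls below tol.

    x              : observed data
    theta          : initial parameters
    log_likelihood : function (x, theta) -> scalar log-likelihood l(theta)
    e_step         : function (x, theta) -> Q, posteriors over the latent variables
    m_step         : function (x, Q) -> updated theta
    """
    prev_ll = log_likelihood(x, theta)
    for _ in range(max_iters):
        Q = e_step(x, theta)      # E-step: Q_i(z) = p(z | x_i; theta)
        theta = m_step(x, Q)      # M-step: maximize the lower bound over theta
        ll = log_likelihood(x, theta)
        # Monotonic convergence guarantees ll >= prev_ll; stop once the gain is tiny.
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return theta
```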
Remark. If we define
$$J(Q, \theta) = \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})},$$
then we know $\ell(\theta) \geq J(Q, \theta)$ from our previous derivation. EM can also be viewed as coordinate ascent on $J$, in which the E-step maximizes it with respect to $Q$ (check this yourself), and the M-step maximizes it with respect to $\theta$.
Armed with our general definition of the EM algorithm, let's go back to our old example of fitting the parameters $\phi$, $\mu$ and $\Sigma$ in a mixture of Gaussians. For the sake of brevity, we carry out the derivations for the M-step updates only for $\phi$ and $\mu_j$, and leave the updates for $\Sigma_j$ as an exercise for the reader.
The E-step is easy. Following our algorithm derivation above, we simply calculate
$$w_j^{(i)} = Q_i(z^{(i)} = j) = P(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma).$$
Here, “$Q_i(z^{(i)} = j)$” denotes the probability of $z^{(i)}$ taking the value $j$ under the distribution $Q_i$.
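For concreteness, here is a minimal NumPy/SciPy sketch of this E-step. The array shapes and the name `e_step` are just one possible arrangement, assumed for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, phi, mu, sigma):
    """Compute w[i, j] = Q_i(z_i = j) = P(z_i = j | x_i; phi, mu, sigma).

    x     : (m, n)    data matrix
    phi   : (k,)      mixing proportions
    mu    : (k, n)    component means
    sigma : (k, n, n) component covariances
    """
    m, k = x.shape[0], phi.shape[0]
    w = np.zeros((m, k))
    for j in range(k):
        # Joint density p(x_i | z_i = j; mu, sigma) * p(z_i = j; phi) for every example i.
        w[:, j] = phi[j] * multivariate_normal.pdf(x, mean=mu[j], cov=sigma[j])
    # Normalize each row by p(x_i) (Bayes' rule) to obtain the posteriors.
    w /= w.sum(axis=1, keepdims=True)
    return w
```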
Next, in the M-step, we need to maximize, with respect to our parameters $\phi$, $\mu$, $\Sigma$, the quantity
$$\sum_{i=1}^{m} \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \phi, \mu, \Sigma)}{Q_i(z^{(i)})}
= \sum_{i=1}^{m} \sum_{j=1}^{k} w_j^{(i)} \log \frac{\frac{1}{(2\pi)^{n/2} |\Sigma_j|^{1/2}} \exp\left(-\frac{1}{2}(x^{(i)} - \mu_j)^T \Sigma_j^{-1} (x^{(i)} - \mu_j)\right) \cdot \phi_j}{w_j^{(i)}}.$$
Let's maximize this with respect to $\mu_l$. Only the quadratic term inside the exponent depends on $\mu_l$, so if we take the derivative with respect to $\mu_l$, we find
$$\nabla_{\mu_l} \sum_{i=1}^{m} \sum_{j=1}^{k} w_j^{(i)} \left(-\frac{1}{2}(x^{(i)} - \mu_j)^T \Sigma_j^{-1} (x^{(i)} - \mu_j)\right)
= \sum_{i=1}^{m} w_l^{(i)} \left(\Sigma_l^{-1} x^{(i)} - \Sigma_l^{-1} \mu_l\right).$$
Setting this to zero and solving for $\mu_l$ therefore yields the update rule
$$\mu_l := \frac{\sum_{i=1}^{m} w_l^{(i)} x^{(i)}}{\sum_{i=1}^{m} w_l^{(i)}},$$
which was what we had in the previous set of notes.
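As a small illustration, this update is a one-line computation in NumPy; the array shapes and the function name below are assumptions made for the sketch:

```python
import numpy as np

def update_means(x, w):
    """M-step update for the component means mu_l.

    x : (m, n) data matrix
    w : (m, k) responsibilities from the E-step, w[i, l] = w_l^{(i)}
    Returns mu of shape (k, n), the responsibility-weighted average of the data:
    mu[l] = sum_i w_l^{(i)} x^{(i)} / sum_i w_l^{(i)}.
    """
    return (w.T @ x) / w.sum(axis=0)[:, None]
```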
Let's do one more example, and derive the M-step update for the parameters $\phi_j$. Grouping together only the terms that depend on $\phi_j$, we find that we need to maximize
$$\sum_{i=1}^{m} \sum_{j=1}^{k} w_j^{(i)} \log \phi_j.$$
However, there is an additional constraint that the $\phi_j$'s sum to 1, since they represent the probabilities $\phi_j = p(z^{(i)} = j; \phi)$. To deal with the constraint that $\sum_{j=1}^{k} \phi_j = 1$, we construct the Lagrangian
$$\mathcal{L}(\phi) = \sum_{i=1}^{m} \sum_{j=1}^{k} w_j^{(i)} \log \phi_j + \beta \left( \sum_{j=1}^{k} \phi_j - 1 \right),$$
where $\beta$ is the Lagrange multiplier. We don't need to worry about the constraint that $\phi_j \geq 0$, because as we'll shortly see, the solution we'll find from this derivation will automatically satisfy that anyway. Taking derivatives, we find
$$\frac{\partial}{\partial \phi_j} \mathcal{L}(\phi) = \sum_{i=1}^{m} \frac{w_j^{(i)}}{\phi_j} + \beta.$$
Setting this to zero and solving, we get
$$\phi_j = \frac{\sum_{i=1}^{m} w_j^{(i)}}{-\beta},$$
i.e., $\phi_j \propto \sum_{i=1}^{m} w_j^{(i)}$. Using the constraint that $\sum_j \phi_j = 1$, we easily find that $-\beta = \sum_{i=1}^{m} \sum_{j=1}^{k} w_j^{(i)} = \sum_{i=1}^{m} 1 = m$. (This used the fact that $w_j^{(i)} = Q_i(z^{(i)} = j)$, and since probabilities sum to 1, $\sum_j w_j^{(i)} = 1$.) We therefore have our M-step update for the parameters $\phi_j$:
$$\phi_j := \frac{1}{m} \sum_{i=1}^{m} w_j^{(i)}.$$
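In code this is simply a column-wise mean of the responsibilities; as with the earlier sketches, the name and array layout below are assumptions:

```python
import numpy as np

def update_mixing_proportions(w):
    """M-step update for the mixing proportions phi_j.

    w : (m, k) responsibilities from the E-step, w[i, j] = w_j^{(i)}
    Returns phi of shape (k,), with phi[j] = (1/m) * sum_i w_j^{(i)}.
    Because each row of w sums to 1, the returned phi also sums to 1.
    """
    return w.mean(axis=0)
```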
The derivation for the M-step updates to $\Sigma_j$ is also entirely straightforward.