So using the product rule, when I take the derivative of this function with respect to theta, what I end up with is a sum of terms. There are a lot of factors here that depend on theta, and so I'll end up with a sum: one term that corresponds to the derivative of the first factor keeping everything else fixed, one term from the derivative of the next factor keeping everything else fixed, and so on, down to one term from the derivative of that last factor keeping everything else fixed. So just apply the product rule to this.
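As a sketch of the expressions being described, written in the finite-horizon notation assumed from earlier in the lecture (horizon T, policy pi_theta(s, a), transition probabilities P_{s a}(s'), and payoff R(s_0) + ... + R(s_T)), the expected payoff is

\[
E[\text{payoff}]
= \sum_{(s_0, a_0, \dots, s_T, a_T)}
P(s_0)\,\pi_\theta(s_0, a_0)\,P_{s_0 a_0}(s_1)\,\pi_\theta(s_1, a_1)\cdots P_{s_{T-1} a_{T-1}}(s_T)\,\pi_\theta(s_T, a_T)\,
\big(R(s_0) + \cdots + R(s_T)\big),
\]

and applying the product rule to the factors that depend on theta gives one term per pi_theta factor:

\[
\frac{\partial}{\partial\theta} E[\text{payoff}]
= \sum_{(s_0, a_0, \dots, s_T, a_T)}
\Big[
P(s_0)\,\tfrac{\partial \pi_\theta(s_0, a_0)}{\partial\theta}\,P_{s_0 a_0}(s_1)\,\pi_\theta(s_1, a_1)\cdots \pi_\theta(s_T, a_T)
+ \cdots
+ P(s_0)\,\pi_\theta(s_0, a_0)\cdots P_{s_{T-1} a_{T-1}}(s_T)\,\tfrac{\partial \pi_\theta(s_T, a_T)}{\partial\theta}
\Big]\big(R(s_0) + \cdots + R(s_T)\big).
\]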
Let’s write that down. So I have that the derivative with respect to theta of the expected value of the payoff is – it turns out I can actually do this entire derivation in exactly four steps, but each of the steps requires a huge amount of writing, so I’ll just start writing and see how that goes, but this is a four-step derivation. So there’s the sum over the state-action sequences as we saw before. Close the bracket, and then times the payoff. That huge amount of writing was just taking my previous formula and differentiating the terms that depend on theta one at a time. This was the term with the derivative of the first pi subscript theta of s0, a0. So there’s the first derivative term. There’s the second one. Then you have plus dot, dot, dot, like in terms of [inaudible]. That’s my last term. So that was step one of four. And then by algebra – let me just write this down and convince us all that it’s true. This is the second of four steps, in which we just convince ourselves that if I take the sum and multiply it by that big product in front, then I get back the sum of terms from step one. For example, when I multiply that product against the first fraction, the pi subscript theta of s0, a0 in the product cancels the pi subscript theta of s0, a0 in the denominator and replaces it with the derivative with respect to theta of pi theta of s0, a0. So [inaudible] algebra was the second step.
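In the same assumed notation, the algebra step being described (step two) divides and multiplies each derivative term by the corresponding pi_theta factor, pulling the full product back out in front:

\[
\frac{\partial}{\partial\theta} E[\text{payoff}]
= \sum_{(s_0, a_0, \dots, s_T, a_T)}
P(s_0)\,\pi_\theta(s_0, a_0)\cdots \pi_\theta(s_T, a_T)
\left[
\frac{\frac{\partial}{\partial\theta}\pi_\theta(s_0, a_0)}{\pi_\theta(s_0, a_0)}
+ \cdots
+ \frac{\frac{\partial}{\partial\theta}\pi_\theta(s_T, a_T)}{\pi_\theta(s_T, a_T)}
\right]
\big(R(s_0) + \cdots + R(s_T)\big).
\]

Multiplying the product through the bracket cancels each pi_theta in a denominator and replaces it with its derivative, which recovers the sum of terms from step one.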
But that term on top is just what I worked out previously: it was the joint probability of the state-action sequence, and now I have that probability times that bracketed sum times the payoff. And so by the definition of expectation, this is just equal to the expected value of that thing times the payoff. So this thing inside the expectation is exactly the step that we were taking in the inner loop of our REINFORCE algorithm. This proves that the expected value of our change to theta is exactly in the direction of the gradient of our expected payoff. That’s how I started this whole derivation: I said let’s look at our expected payoff and take the derivative of that with respect to theta. What we’ve proved is that in expectation, the step direction taken by REINFORCE is exactly the gradient of the thing I’m trying to optimize. This shows that the algorithm is a stochastic gradient ascent algorithm.
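Under the same assumed notation, the product in front of the bracket is the joint probability P(s_0, a_0, ..., s_T, a_T) of the state-action sequence, so the sum becomes an expectation over trajectories sampled by following pi_theta:

\[
\frac{\partial}{\partial\theta} E[\text{payoff}]
= E\!\left[
\left(\sum_{t=0}^{T}
\frac{\frac{\partial}{\partial\theta}\pi_\theta(s_t, a_t)}{\pi_\theta(s_t, a_t)}
\right)
\big(R(s_0) + \cdots + R(s_T)\big)
\right].
\]

The quantity inside the expectation is, up to the learning rate, the update computed on each sampled trajectory in the inner loop of REINFORCE, which is the stochastic gradient ascent claim. As a rough illustration only (not the lecture's code), a single such update for a simple logistic two-action policy could look like the sketch below; the policy form, function names, and learning rate here are assumptions.

```python
# A minimal sketch (not from the lecture) of the stochastic-gradient view of
# REINFORCE, matching the identity above: given one sampled trajectory, step
# theta in the direction (sum_t grad log pi_theta(s_t, a_t)) * (total payoff).
# The logistic two-action policy and these function names are illustrative assumptions.

import numpy as np

def grad_log_pi(theta, s, a):
    """grad_theta log pi_theta(s, a) for pi_theta(a=1 | s) = sigmoid(theta . s), a in {0, 1}."""
    p1 = 1.0 / (1.0 + np.exp(-theta @ s))
    return (a - p1) * s

def reinforce_step(theta, trajectory, rewards, alpha=0.01):
    """One inner-loop update: theta := theta + alpha * (sum_t grad log pi) * payoff."""
    payoff = sum(rewards)                                        # R(s_0) + ... + R(s_T)
    g = sum(grad_log_pi(theta, s, a) for (s, a) in trajectory)   # sum of per-step score terms
    return theta + alpha * g * payoff
```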
I wrote a lot. Why don’t you take a minute to look at the equations and [inaudible] check that everything makes sense. I’ll erase a couple of boards and then see if you have questions after that. Questions? Could you raise your hand if this makes sense? Great. So, a few comments: we talked about the value function approximation approaches, where you approximate V star and then go from V star to pi star. Then there were also the policy search approaches, where you try to approximate the policy directly. So let’s talk briefly about when either one may be preferable.