So I showed this last time, and I won’t go into the details again today. I said last time that you can actually solve for V^π by solving a linear system of equations. There was a form of Bellman’s equations for V^π, and it turned out that, if you write it out, you end up with a linear system of 11 equations in 11 unknowns, and so you can solve for the value function of a fixed policy by solving a system of linear equations with 11 variables and 11 constraints. So that’s policy iteration; whereas in value iteration, going back to the board, you repeatedly perform this update where you update the value of a state as the [inaudible]. So I hope it makes sense that these two algorithms are different.
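To make the contrast concrete, here is a minimal NumPy sketch of the two computations just described; the names and array shapes (P_pi, P, R, gamma) are illustrative assumptions, not anything defined in the lecture.

```python
import numpy as np

def evaluate_policy(P_pi, R, gamma):
    """Policy evaluation: solve the Bellman equations for a fixed policy.

    V^pi = R + gamma * P_pi @ V^pi   =>   (I - gamma * P_pi) V^pi = R,
    where P_pi[s, s'] = P(s' | s, pi(s)) and R[s] is the immediate reward.
    For the 11-state grid-world example this is the 11-equation,
    11-unknown linear system mentioned in the lecture.
    """
    n = len(R)
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R)

def value_iteration_update(V, P, R, gamma):
    """One value-iteration backup:
    V(s) <- R(s) + gamma * max_a sum_{s'} P(s, a, s') * V(s'),
    where P has shape (n_states, n_actions, n_states).
    """
    return R + gamma * np.max(P @ V, axis=-1)
```

Policy evaluation gets V^π exactly in one linear solve, while value iteration repeats the second update until V converges toward V*.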
Student: [Inaudible] on the absorbing states, so is the assumption that we can never get out of those states?
Instructor (Andrew Ng): Yes. There are always things where you solve for this [inaudible], for example, to make the numbers come out nicely, but I don’t want to spend too much time on them. But yes, the assumption is that once you enter an absorbing state, the world ends, or there are no more rewards after that. Another way to think of the absorbing states, which is mathematically equivalent, is that each absorbing state transitions with probability 1 to some 12th state, and once you’re in that 12th state you always remain there and receive no further rewards. If you want, you can think of this as actually an MDP with 12 states rather than 11 states, where the 12th state is a zero-cost absorbing state that you get stuck in forever. Other questions? Yeah, please go.
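If it helps to see that construction written out, here is a small sketch of augmenting an MDP with such a zero-reward sink state; the array layout and the function name are assumptions made purely for illustration.

```python
import numpy as np

def add_absorbing_sink(P, R, terminal_states):
    """Rewrite terminal states as transitions into one extra sink state.

    P: (S, A, S) transition probabilities, R: (S,) rewards,
    terminal_states: indices of the absorbing states.
    Returns an (S+1)-state MDP whose last state is a zero-reward sink.
    """
    S, A, _ = P.shape
    P2 = np.zeros((S + 1, A, S + 1))
    P2[:S, :, :S] = P
    for s in terminal_states:
        P2[s, :, :] = 0.0        # erase the old outgoing transitions
        P2[s, :, S] = 1.0        # move to the sink with probability 1
    P2[S, :, S] = 1.0            # the sink always stays in the sink
    R2 = np.append(R, 0.0)       # no reward is ever earned in the sink
    return P2, R2
```

Both formulations give the same value function, since no payoff accumulates once the sink is reached.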
Student: Where did Bellman’s equations [inaudible] to the optimal value [inaudible]?
Instructor (Andrew Ng): Boy, yeah. Okay, Bellman’s equations, this equation that I’m pointing to, I tried to give a justification for it last time. I’ll say it in one sentence: the expected total payoff I get starting from a state is equal to my immediate reward, which is the reward I get for starting in that state, plus the discounted payoff I expect from wherever I go next. Let’s see. If I start in some state, I’m going to get some first reward, and then I transition to some other state, and from that other state I’ll get some additional rewards from then on. So Bellman’s equations break that sum into two pieces. They say the value of a state is equal to the reward you get right away plus γ times the rewards you get in the future, so V*(s) = R(s) + γ max_a Σ_s' P_sa(s') V*(s'). So Bellman’s equations break V* into two terms: the first term is the immediate reward, and the second, γ times the rewards you get in the future, turns out to be equal to that second term on the board.
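Written out in standard notation (the formula itself is the one on the board; the symbol P_sa(s') for the transition probabilities is the course's usual convention), the two-term decomposition is:

```latex
% Bellman's equation for the optimal value function V*:
% immediate reward plus gamma times the expected future payoff.
V^{*}(s) \;=\; R(s) \;+\; \gamma \max_{a \in A} \sum_{s'} P_{sa}(s')\, V^{*}(s')
```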
I spent more time justifying this in the previous lecture, but hopefully, for the purposes of this lecture, if you’re not sure where this came from, if you don’t remember the justification for it, why don’t you just take my word for it that this equation holds true, since I use it a little bit later as well. The lecture notes explain the justification for why this equation holds true a little further. But for now, just take my word for it that this holds true, because we’ll use it a little bit later today as well.