So the intuition is that even if you had a pretty sloppy controller come up with your original nominal trajectory, you'd still expect your state and action at time t to be reasonably similar to what that sloppy controller did. Say you want to fly a trajectory where the helicopter makes a 90-degree turn. Even a bad controller that does a pretty sloppy job will still keep you moving roughly around that trajectory. So the nominal trajectory is really telling you, just very roughly, where along the 90-degree turn you expect to be at any given time, and so let's linearize around that point. Okay?
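To make that linearization step concrete, here is a minimal sketch (not from the lecture) of linearizing a nonlinear simulator f(s, a) around a nominal trajectory using finite differences. The function name linearize_along_trajectory and the step size eps are my own choices for illustration.

```python
import numpy as np

def linearize_along_trajectory(f, s_bar, a_bar, eps=1e-5):
    """Linearize a nonlinear simulator f(s, a) -> s_next around a nominal
    trajectory (s_bar[t], a_bar[t]) using central finite differences.

    Returns lists A, B, c such that, near the nominal trajectory,
        s_{t+1} ~= c[t] + A[t] @ (s_t - s_bar[t]) + B[t] @ (a_t - a_bar[t]).
    """
    A, B, c = [], [], []
    for s0, a0 in zip(s_bar, a_bar):
        n, m = len(s0), len(a0)
        A_t = np.zeros((n, n))
        B_t = np.zeros((n, m))
        # Jacobian with respect to the state.
        for i in range(n):
            ds = np.zeros(n); ds[i] = eps
            A_t[:, i] = (f(s0 + ds, a0) - f(s0 - ds, a0)) / (2 * eps)
        # Jacobian with respect to the action.
        for j in range(m):
            da = np.zeros(m); da[j] = eps
            B_t[:, j] = (f(s0, a0 + da) - f(s0, a0 - da)) / (2 * eps)
        A.append(A_t); B.append(B_t); c.append(f(s0, a0))
    return A, B, c
```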
Then, having found the linear model, you run LQR to get the optimal policy for this specific linear model, and now you have a better policy. And the final step is — boy, I'll write this on a different board, I guess. Okay. The last step is you use the simulator, the model, to come up with a new nominal trajectory. I.e., you take the controller you just learned and basically try flying your helicopter in your simulator. You initialize the simulator to the initial state, which I'll call s-bar-zero, and at every time step you choose an action, which I'll call a-bar-t, using the controller pi-t that you just learned with LQR. Then you simulate forward in time: you use the simulator, the function f, to tell you what the next state s-bar-t-plus-one will be when your previous state and action were s-bar-t and a-bar-t. And then you linearize around this new trajectory and repeat — going back to step two of the algorithm.

This turns out to be a surprisingly effective procedure. The cartoon of what this algorithm may do is as follows. Let's say you want the helicopter to make a 90-degree turn, so you want it to follow a trajectory like that. You start with a very bad controller — you just hack up some controller, whatever — so you have some way to come up with an initial nominal trajectory. Maybe your initial controller overshoots the turn, takes the turn wide. But you can still use these points to linearize the simulator — linearize a very non-linear simulator — and the idea is that maybe this isn't such a bad approximation: a linearization around this sequence of states will actually be reasonable, because your helicopter won't be exactly on those states, but it will be close to that sequence of states at every time step. So after one iteration of DDP, maybe you get a little bit closer to the target trajectory, and now you have an even better place to linearize around. Then after another iteration of DDP you get closer and closer to finding exactly the trajectory you want. Okay?
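Putting the steps together, here is a hedged sketch of one outer iteration of this procedure: linearize around the current nominal trajectory, solve the resulting time-varying finite-horizon LQR problem with a backward Riccati recursion, and roll the new policy through the nonlinear simulator to get the next nominal trajectory. It assumes the linearization offset has already been folded into the form s_{t+1} ~= A_t s_t + B_t a_t (e.g. by augmenting the state with a constant 1), and it uses simple quadratic costs s'Qs + a'Ra; the name ddp_iteration and the cost matrices Q, R, Q_final are illustrative choices, not the lecture's exact setup.

```python
import numpy as np

def ddp_iteration(f, s_bar, a_bar, Q, R, Q_final):
    """One outer iteration of the DDP-style loop from the lecture:
      1. linearize the simulator f around the current nominal trajectory,
      2. solve the time-varying finite-horizon LQR problem,
      3. roll the new policy through the simulator to get the next
         nominal trajectory.
    Uses the linearize_along_trajectory sketch above."""
    T = len(a_bar)
    A, B, _ = linearize_along_trajectory(f, s_bar[:T], a_bar)

    # Backward Riccati recursion for the time-varying gains L_t,
    # so that the policy is pi_t(s) = L_t @ s.
    P = Q_final
    L = [None] * T
    for t in reversed(range(T)):
        L[t] = -np.linalg.solve(R + B[t].T @ P @ B[t], B[t].T @ P @ A[t])
        P = Q + A[t].T @ P @ (A[t] + B[t] @ L[t])

    # Forward pass: execute the new policy in the *nonlinear* simulator
    # to produce the new nominal trajectory for the next iteration.
    s_new, a_new = [s_bar[0]], []
    for t in range(T):
        a_t = L[t] @ s_new[t]
        a_new.append(a_t)
        s_new.append(f(s_new[t], a_t))
    return s_new, a_new, L
```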
So it turns out DDP is a form of local search algorithm: on each iteration you find a slightly better place to linearize, so you end up with a slightly better controller, and you repeat. This is actually one of the things we do on the helicopter, and it works surprisingly well on many problems. Cool. I was actually going to show some helicopter videos, but in the interest of time, let me just defer that to the next lecture. I'll show you a bunch of cool helicopter things in the next lecture, but let me just check if there are questions about this before I move on. Yeah?