That turns out to be optimal essentially only for that special case of a POMDP. In the more general case, that strategy of designing a controller assuming full observability, then just estimating the state and plugging the two together, is often a very reasonable strategy for general POMDPs, but it is not always guaranteed to be optimal. Solving these problems in general is NP-hard. So what I want to do today is actually talk about a different class of reinforcement learning algorithms. These are called policy search algorithms. In particular, policy search algorithms can be applied equally well to MDPs, to fully observable Markov decision processes, or to these POMDPs, these partially observable MDPs. What I want to do now is describe policy search algorithms applied to MDPs, applied to the fully observable case, and at the end I'll just briefly describe how you can take policy search algorithms and apply them to POMDPs. In the latter case, when you apply a policy search algorithm to a POMDP, it's going to be hard to guarantee that you get the globally optimal policy, because solving POMDPs in general is NP-hard. But nonetheless, policy search turns out to be, I think, one of the most effective classes of reinforcement learning algorithms, both for MDPs and for POMDPs.
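Before moving on, here is a minimal sketch, not from the lecture itself, of what that plug-the-two-together strategy might look like in code (it is often called certainty-equivalence control). The names pi_star, estimate_state, and env are hypothetical placeholders: pi_star is a policy designed for the fully observable MDP, and estimate_state is some state estimator, for example the mean of a Kalman filter's belief over the hidden state.

```python
# Hedged sketch of the "plug them together" strategy for a POMDP:
# design pi_star as if the state were fully observable, then at run time
# feed it a point estimate of the hidden state. Often reasonable in practice,
# but, as noted above, not guaranteed to be optimal in general.

def run_certainty_equivalence(pi_star, estimate_state, env, horizon):
    """pi_star: maps a state estimate to an action (designed assuming full observability).
    estimate_state: maps the observation history to a point estimate of the hidden state.
    env: a POMDP simulator with reset() -> obs and step(a) -> (obs, reward, done)."""
    observations = []
    obs = env.reset()
    total_reward = 0.0
    for _ in range(horizon):
        observations.append(obs)
        s_hat = estimate_state(observations)  # e.g. Kalman-filter mean of the belief
        action = pi_star(s_hat)               # act as if s_hat were the true state
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```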
So here's what we're going to do. In policy search, we're going to define some set, which I'll denote capital Pi, of policies, and our strategy is to search for a good policy lowercase pi in this set capital Pi. Just by analogy, the way we define the set capital Pi of policies and then search for a good policy in that set is analogous to supervised learning, where we defined a set script H of hypotheses and would search for a good hypothesis in that set script H. Policy search is sometimes also called direct policy search, to contrast it with the sort of algorithms we've been talking about so far. In all the algorithms we've talked about so far, we would try to find V star, the optimal value function, and then we'd use V star, the optimal value function, to try to compute or approximate pi star. So all the approaches we talked about previously find a good policy by first computing the value function and then going from the value function to the policy. In contrast, in policy search algorithms, also called direct policy search algorithms, the idea is that we're going to, quote, "directly" try to approximate a good policy without going through the intermediate stage of finding the value function. Let's see. And as I develop policy search, there's one step that's sometimes slightly confusing. Making an analogy to supervised learning again: when we talked about logistic regression, I said we have input features X and some labels Y, and I said let's approximate Y using the logistic function of the inputs X. And at least initially, that logistic function was sort of pulled out of the air.
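To make the set capital Pi concrete, here is a minimal sketch under assumptions of my own rather than anything fixed by the lecture so far: a two-action MDP whose state is a real-valued feature vector, with Pi taken to be the stochastic policies whose probability of picking action 1 is the logistic function of the state, the parameterization the logistic-regression analogy is pointing toward. Searching over Pi then just means searching over the weight vector theta.

```python
import numpy as np

def logistic_policy(theta, s):
    """Probability of taking action 1 in state s under the policy pi_theta.
    theta and s are real-valued vectors of the same dimension."""
    return 1.0 / (1.0 + np.exp(-np.dot(theta, s)))

def sample_action(theta, s, rng=None):
    """Sample an action (0 or 1) from the stochastic policy pi_theta."""
    rng = rng if rng is not None else np.random.default_rng()
    p_action_1 = logistic_policy(theta, s)
    return 1 if rng.random() < p_action_1 else 0

# Example usage with an arbitrary (hypothetical) choice of theta and state.
theta = np.array([0.5, -1.0, 0.2])
s = np.array([1.0, 0.3, -0.7])
print(logistic_policy(theta, s))   # probability of taking action 1
print(sample_action(theta, s))     # one sampled action
```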