Both value iteration and policy iteration are standard algorithms for solving MDPs, and there isn't currently universal agreement over which algorithm is better. For small MDPs, policy iteration is often very fast and converges within very few iterations. However, for MDPs with large state spaces, solving for $V^\pi$ explicitly would involve solving a large system of linear equations, and could be difficult. In these problems, value iteration may be preferred. For this reason, in practice value iteration seems to be used more often than policy iteration.
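To make the contrast concrete, here is a minimal sketch (assuming a small, fully specified MDP with transition array `P[a, s, s']`, state rewards `R[s]`, and discount `gamma`; these names are illustrative, not from the original text) of the Bellman backup used by value iteration versus the linear system solved in policy iteration's evaluation step:

```python
import numpy as np

def value_iteration(P, R, gamma, iters=10000, tol=1e-8):
    """Repeatedly apply the Bellman backup V(s) := R(s) + gamma * max_a sum_{s2} P_sa(s2) V(s2).

    P has shape (num_actions, num_states, num_states) with P[a, s, s2] = P_sa(s2);
    R has shape (num_states,).
    """
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        V_new = R + gamma * (P @ V).max(axis=0)   # max over actions, for every state
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V


def evaluate_policy(P, R, gamma, pi):
    """Policy evaluation step of policy iteration: solve the |S|-by-|S| linear system
    V = R + gamma * P_pi V exactly, which is the step that becomes expensive
    when the state space is large."""
    n = R.shape[0]
    P_pi = P[pi, np.arange(n), :]   # row s is the transition distribution under action pi(s)
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R)
```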
So far, we have discussed MDPs and algorithms for MDPs assuming that the state transition probabilities and rewards are known. In many realistic problems, we are not given state transition probabilities and rewards explicitly, but must instead estimate them from data. (Usually, $S$, $A$ and $\gamma$ are known.)
For example, suppose that, for the inverted pendulum problem (see problem set 4), we had a number of trials in the MDP that proceeded as follows:

$$s_0^{(1)} \xrightarrow{a_0^{(1)}} s_1^{(1)} \xrightarrow{a_1^{(1)}} s_2^{(1)} \xrightarrow{a_2^{(1)}} s_3^{(1)} \xrightarrow{a_3^{(1)}} \cdots$$
$$s_0^{(2)} \xrightarrow{a_0^{(2)}} s_1^{(2)} \xrightarrow{a_1^{(2)}} s_2^{(2)} \xrightarrow{a_2^{(2)}} s_3^{(2)} \xrightarrow{a_3^{(2)}} \cdots$$
$$\cdots$$
Here, $s_i^{(j)}$ is the state we were in at time $i$ of trial $j$, and $a_i^{(j)}$ is the corresponding action that was taken from that state. In practice, each of the trials above might be run until the MDP terminates (such as if the pole falls over in the inverted pendulum problem), or it might be run for some large but finite number of timesteps.
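As a small illustration (the variable names here are assumptions for this sketch, not notation from the text), one trial could be recorded simply as the sequence of visited states together with the actions taken between them:

```python
# One trial s_0 -> a_0 -> s_1 -> a_1 -> s_2 -> ..., with states and actions
# encoded as integer indices into S and A.
trial_states = [0, 2, 3, 1]    # s_0, s_1, s_2, s_3
trial_actions = [1, 0, 1]      # a_0, a_1, a_2 (one fewer entry than the states)
```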
Given this “experience” in the MDP consisting of a number of trials, we can then easily derive the maximum likelihood estimates for the state transition probabilities:

$$P_{sa}(s') = \frac{\#\text{ times we took action } a \text{ in state } s \text{ and got to } s'}{\#\text{ times we took action } a \text{ in state } s}$$
Or, if the ratio above is “0/0” (corresponding to the case of never having taken action $a$ in state $s$ before), then we might simply estimate $P_{sa}(s')$ to be $1/|S|$. (I.e., estimate $P_{sa}$ to be the uniform distribution over all states.)
Note that, if we gain more experience (observe more trials) in the MDP, there is an efficient way to update our estimated state transition probabilities using the new experience. Specifically, if we keep around the counts for both the numerator and denominator terms of the estimate above, then as we observe more trials, we can simply keep accumulating those counts. Computing the ratio of these counts then gives our estimate of $P_{sa}$.
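A minimal sketch of this bookkeeping (class and method names are illustrative assumptions, with trials represented as in the earlier sketch): keep one count per $(s, a, s')$ triple for the numerator and one per $(s, a)$ pair for the denominator, accumulate them as new trials arrive, and take their ratio, falling back to the uniform distribution for state-action pairs that were never tried:

```python
import numpy as np

class TransitionModel:
    """Maximum likelihood estimate of P_sa(s') from observed trials, updated incrementally."""

    def __init__(self, num_states, num_actions):
        self.num_states = num_states
        # numerator[a, s, s2]: number of times we took action a in state s and got to s2
        self.numerator = np.zeros((num_actions, num_states, num_states))
        # denominator[a, s]: number of times we took action a in state s
        self.denominator = np.zeros((num_actions, num_states))

    def update(self, states, actions):
        """Accumulate counts from one trial (visited states and the actions taken between them)."""
        for t, a in enumerate(actions):
            self.numerator[a, states[t], states[t + 1]] += 1
            self.denominator[a, states[t]] += 1

    def estimate(self):
        """P[a, s, s2] = numerator / denominator; unseen (s, a) pairs (the "0/0" case)
        get the uniform distribution 1/|S| over all states."""
        P = np.full(self.numerator.shape, 1.0 / self.num_states)
        seen = self.denominator > 0                         # boolean mask over (a, s) pairs
        P[seen] = self.numerator[seen] / self.denominator[seen][:, None]
        return P
```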
Using a similar procedure, if $R$ is unknown, we can also pick our estimate of the expected immediate reward $R(s)$ in state $s$ to be the average reward observed in state $s$.
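A similarly small sketch (again with assumed names) for the reward estimate: accumulate the total reward observed in each state and the number of visits, and average:

```python
import numpy as np

class RewardModel:
    """Estimate R(s) as the average reward observed in state s, updated incrementally."""

    def __init__(self, num_states):
        self.total_reward = np.zeros(num_states)
        self.visits = np.zeros(num_states)

    def update(self, states, rewards):
        """Accumulate the reward received at each state visited during a trial."""
        for s, r in zip(states, rewards):
            self.total_reward[s] += r
            self.visits[s] += 1

    def estimate(self, default=0.0):
        """Average observed reward per state; `default` covers states never visited."""
        R = np.full(self.total_reward.shape, default)
        seen = self.visits > 0
        R[seen] = self.total_reward[seen] / self.visits[seen]
        return R
```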
Having learned a model for the MDP, we can then use either value iteration or policy iteration to solve the MDP using the estimated transition probabilities and rewards. For example, putting together model learning and value iteration, here is one possible algorithm for learning in an MDP with unknown state transition probabilities:

1. Initialize $\pi$ randomly.
2. Repeat {
   (a) Execute $\pi$ in the MDP for some number of trials.
   (b) Using the accumulated experience in the MDP, update our estimates for $P_{sa}$ (and $R$, if applicable).
   (c) Apply value iteration with the estimated state transition probabilities and rewards to get a new estimated value function $V$.
   (d) Update $\pi$ to be the greedy policy with respect to $V$.
}
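In code, this loop might look like the following sketch. It reuses the hypothetical `TransitionModel`, `RewardModel`, and `value_iteration` helpers from the earlier sketches, and assumes an environment hook `run_trial(policy)` that executes one trial and returns the visited states, the actions taken, and the rewards received; none of these names come from the original text.

```python
import numpy as np

def greedy_policy(P, V):
    """pi(s) = argmax_a sum_{s2} P_sa(s2) V(s2); with state-only rewards R(s),
    neither R nor gamma changes which action attains the max."""
    return np.argmax(P @ V, axis=0)


def learn_mdp(run_trial, num_states, num_actions, gamma,
              num_rounds=100, trials_per_round=10):
    trans = TransitionModel(num_states, num_actions)        # sketches defined above
    rew = RewardModel(num_states)
    pi = np.random.randint(num_actions, size=num_states)    # 1. initialize pi randomly
    for _ in range(num_rounds):                              # 2. repeat:
        for _ in range(trials_per_round):
            states, actions, rewards = run_trial(pi)         # (a) execute pi for some trials
            trans.update(states, actions)                    # (b) update estimates of P_sa
            rew.update(states, rewards)                      #     (and R)
        P, R = trans.estimate(), rew.estimate()
        V = value_iteration(P, R, gamma)                     # (c) value iteration on the model
        pi = greedy_policy(P, V)                             # (d) greedy policy w.r.t. V
    return pi
```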