Note that π*(s) gives the action a that attains the maximum in the “max” in Equation [link] .
It is a fact that for every state s and every policy π, we have

V*(s) = V^{π*}(s) ≥ V^π(s).

The first equality says that V^{π*}, the value function for π*, is equal to the optimal value function V* for every state s. Further, the inequality says that π*'s value is at least as large as the value of any other policy. In other words, π* as defined in Equation [link] is the optimal policy.
Note that π* has the interesting property that it is the optimal policy for all states s. Specifically, it is not the case that if we were starting in some state s then there'd be some optimal policy for that state, and if we were starting in some other state s' then there'd be some other policy that's optimal for s'. Rather, the same policy π* attains the maximum in Equation [link] for all states s. This means that we can use the same policy π* no matter what the initial state of our MDP is.
We now describe two efficient algorithms for solving finite-state MDPs. For now, we will consider only MDPs with finite state and action spaces (|S| < ∞, |A| < ∞).
The first algorithm, value iteration , is as follows:

1. For each state s, initialize V(s) := 0.
2. Repeat until convergence {
       For every state s, update V(s) := R(s) + max_{a ∈ A} γ ∑_{s'} P_sa(s') V(s').
   }
This algorithm can be thought of as repeatedly trying to update the estimated value function using Bellman Equations [link] .
There are two possible ways of performing the updates in the inner loop of the algorithm. In the first, we can first compute the new values of V(s) for every state s, and then overwrite all the old values with the new values. This is called a synchronous update. In this case, the algorithm can be viewed as implementing a “Bellman backup operator” that takes a current estimate of the value function and maps it to a new estimate. (See homework problem for details.) Alternatively, we can also perform asynchronous updates. Here, we would loop over the states (in some order), updating the values one at a time.
Under either synchronous or asynchronous updates, it can be shown that value iteration will cause V to converge to V*. Having found V*, we can then use Equation [link] to find the optimal policy.
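As a rough illustration, here is a short NumPy sketch of synchronous value iteration on a small made-up MDP. The encoding of the MDP as arrays P (where P[s, a, s'] stands in for P_sa(s')), R (the state rewards R(s)), and the discount factor gamma, as well as the tolerance used to decide convergence, are assumptions made for this example and are not prescribed by the notes.

```python
import numpy as np

# Hypothetical finite MDP with |S| = 3 states and |A| = 2 actions.
# P[s, a, s'] plays the role of P_sa(s'); R[s] is the reward for state s.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.9, 0.0]],
    [[0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],
])
R = np.array([0.0, 0.0, 1.0])
gamma = 0.9

def value_iteration(P, R, gamma, tol=1e-8):
    """Synchronous value iteration: repeat the Bellman backup until the
    value estimates stop changing (up to tol)."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)              # initialize V(s) := 0 for every state
    while True:
        # Q[s, a] = R(s) + gamma * sum_{s'} P_sa(s') V(s')
        Q = R[:, None] + gamma * P.dot(V)
        V_new = Q.max(axis=1)           # Bellman backup: V(s) := max_a Q(s, a)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new                       # synchronous update: overwrite all values at once

V_star = value_iteration(P, R, gamma)
# Greedy policy with respect to V*: pi(s) = argmax_a sum_{s'} P_sa(s') V*(s')
pi_star = P.dot(V_star).argmax(axis=1)
print(V_star, pi_star)
```

An asynchronous variant would instead sweep over the states in some order inside the loop, updating V(s) in place one state at a time rather than computing all the new values before overwriting.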
Apart from value iteration, there is a second standard algorithm for finding an optimal policy for an MDP. The policy iteration algorithm proceeds as follows:

1. Initialize π randomly.
2. Repeat until convergence {
       (a) Let V := V^π.
       (b) For each state s, let π(s) := arg max_{a ∈ A} ∑_{s'} P_sa(s') V(s').
   }
Thus, the inner loop repeatedly computes the value function for the current policy, and then updates the policy using the current value function. (The policy π found in step (b) is also called the policy that is greedy with respect to V.) Note that step (a) can be done via solving Bellman's equations as described earlier, which in the case of a fixed policy is just a set of linear equations in |S| variables.
After at most a finite number of iterations of this algorithm, V will converge to V*, and π will converge to π*.
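For comparison, here is a similarly hedged NumPy sketch of policy iteration on the same made-up MDP as above, where step (a) solves Bellman's equations for V^π as a linear system in |S| variables and step (b) makes the policy greedy with respect to that value function. Again, the array names and the encoding of the MDP are illustrative assumptions, not notation fixed by the notes.

```python
import numpy as np

# Same illustrative MDP encoding as in the value iteration sketch:
# P[s, a, s'] = P_sa(s'), R[s] the state reward, gamma the discount factor.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.9, 0.0]],
    [[0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],
])
R = np.array([0.0, 0.0, 1.0])
gamma = 0.9

def evaluate_policy(P, R, gamma, pi):
    """Step (a): solve Bellman's equations for V^pi.  For a fixed policy this
    is a linear system in |S| unknowns: V = R + gamma * P_pi V."""
    n_states = R.shape[0]
    P_pi = P[np.arange(n_states), pi]            # P_pi[s, s'] = P_{s, pi(s)}(s')
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)

def policy_iteration(P, R, gamma):
    n_states, n_actions, _ = P.shape
    pi = np.zeros(n_states, dtype=int)           # initialize pi (here: always action 0)
    while True:
        V = evaluate_policy(P, R, gamma, pi)     # step (a): V := V^pi
        # Step (b): pi(s) := argmax_a sum_{s'} P_sa(s') V(s')
        pi_new = P.dot(V).argmax(axis=1)
        if np.array_equal(pi_new, pi):           # policy stable, so it is optimal
            return pi, V
        pi = pi_new

pi_star, V_star = policy_iteration(P, R, gamma)
print(pi_star, V_star)
```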