Or, when we are writing rewards as a function of the states only, this becomes
$$R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots$$
For most of our development, we will use the simpler state rewards $R(s)$, though the generalization to state-action rewards $R(s,a)$ offers no special difficulties.
Our goal in reinforcement learning is to choose actions over time so as to maximize the expected value of the total payoff:
$$\mathrm{E}\left[R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots\right]$$
Note that the reward at timestep $t$ is discounted by a factor of $\gamma^t$. Thus, to make this expectation large, we would like to accrue positive rewards as soon as possible (and postpone negative rewards as long as possible). In economic applications where $R(\cdot)$ is the amount of money made, $\gamma$ also has a natural interpretation in terms of the interest rate (where a dollar today is worth more than a dollar tomorrow).
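As a small illustration of this objective, the sketch below computes the discounted total payoff for one sampled sequence of rewards; the reward values and discount factor are made-up numbers, not anything from the notes.

```python
# Sketch: discounted total payoff for one sampled trajectory.
# The reward sequence and gamma below are illustrative placeholders.
gamma = 0.9
rewards = [1.0, 0.0, -2.0, 5.0]   # R(s_0), R(s_1), R(s_2), R(s_3)

# Total payoff: R(s_0) + gamma*R(s_1) + gamma^2*R(s_2) + ...
total_payoff = sum(gamma**t * r for t, r in enumerate(rewards))
print(total_payoff)   # 1.0 + 0.0 - 1.62 + 3.645 = 3.025
```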
A policy is any function $\pi : S \mapsto A$ mapping from the states to the actions. We say that we are executing some policy $\pi$ if, whenever we are in state $s$, we take action $a = \pi(s)$. We also define the value function for a policy $\pi$ according to
$$V^\pi(s) = \mathrm{E}\left[R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots \mid s_0 = s, \pi\right].$$
$V^\pi(s)$ is simply the expected sum of discounted rewards upon starting in state $s$, and taking actions according to $\pi$. (This notation in which we condition on $\pi$ isn't technically correct because $\pi$ isn't a random variable, but it is quite standard in the literature.)
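One way to make this definition concrete is to estimate $V^\pi(s)$ by averaging the discounted returns of simulated trajectories that start in $s$ and follow $\pi$. The sketch below assumes hypothetical helpers `R(s)` and `sample_next_state(s, a)` standing in for the MDP's reward function and transition probabilities $P_{sa}$; it is an illustration, not part of the notes.

```python
def estimate_value(s, policy, R, sample_next_state, gamma=0.95,
                   num_rollouts=1000, horizon=200):
    """Monte Carlo estimate of V^pi(s): average the discounted return
    over rollouts that start in state s and act according to policy.

    R(state) and sample_next_state(state, action) are hypothetical
    helpers for the MDP's rewards and transition probabilities."""
    total = 0.0
    for _ in range(num_rollouts):
        state, discount, payoff = s, 1.0, 0.0
        for _ in range(horizon):          # truncate the infinite sum
            payoff += discount * R(state)
            state = sample_next_state(state, policy(state))
            discount *= gamma
        total += payoff
    return total / num_rollouts
```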
Given a fixed policy $\pi$, its value function $V^\pi$ satisfies the Bellman equations:
$$V^\pi(s) = R(s) + \gamma \sum_{s' \in S} P_{s\pi(s)}(s')\, V^\pi(s').$$
This says that the expected sum of discounted rewards $V^\pi(s)$ for starting in $s$ consists of two terms: First, the immediate reward $R(s)$ that we get right away simply for starting in state $s$, and second, the expected sum of future discounted rewards. Examining the second term in more detail, we see that the summation term above can be rewritten $\mathrm{E}_{s' \sim P_{s\pi(s)}}\left[V^\pi(s')\right]$. This is the expected sum of discounted rewards for starting in state $s'$, where $s'$ is distributed according to $P_{s\pi(s)}$, which is the distribution over where we will end up after taking the first action $\pi(s)$ in the MDP from state $s$. Thus, the second term above gives the expected sum of discounted rewards obtained after the first step in the MDP.
Bellman's equations can be used to efficiently solve for $V^\pi$. Specifically, in a finite-state MDP ($|S| < \infty$), we can write down one such equation for $V^\pi(s)$ for every state $s$. This gives us a set of $|S|$ linear equations in $|S|$ variables (the unknown $V^\pi(s)$'s, one for each state), which can be efficiently solved for the $V^\pi(s)$'s.
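To see this concretely, stacking the $|S|$ equations gives the matrix form $V^\pi = R + \gamma P^\pi V^\pi$, i.e. $(I - \gamma P^\pi)V^\pi = R$, where row $s$ of $P^\pi$ is the distribution $P_{s\pi(s)}$. The sketch below solves this system with NumPy; the transition matrix, rewards, and discount factor are illustrative stand-ins, not a specific example from the notes.

```python
import numpy as np

# Illustrative 3-state MDP under a fixed policy pi (numbers made up).
# P_pi[s, s'] = P_{s pi(s)}(s'): probability of landing in s' when the
# policy's action is taken in s.  Each row sums to 1.
P_pi = np.array([[0.8, 0.2, 0.0],
                 [0.1, 0.6, 0.3],
                 [0.0, 0.2, 0.8]])
R = np.array([1.0, 0.0, -0.5])   # R(s) for each state
gamma = 0.9

# Bellman equations in matrix form: V = R + gamma * P_pi @ V,
# i.e. (I - gamma * P_pi) V = R -- one linear equation per state.
V_pi = np.linalg.solve(np.eye(len(R)) - gamma * P_pi, R)
print(V_pi)   # V^pi(s) for each of the three states
```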
We also define the optimal value function according to
$$V^*(s) = \max_\pi V^\pi(s).$$
In other words, this is the best possible expected sum of discounted rewards that can be attained using any policy. There is also a version of Bellman's equations for the optimal value function:
$$V^*(s) = R(s) + \max_{a \in A} \gamma \sum_{s' \in S} P_{sa}(s')\, V^*(s').$$
The first term above is the immediate reward as before. The second term is the maximum over all actions $a$ of the expected future sum of discounted rewards we'll get after taking action $a$. You should make sure you understand this equation and see why it makes sense.
We also define a policy $\pi^* : S \mapsto A$ as follows:
$$\pi^*(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{sa}(s')\, V^*(s').$$
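To illustrate this definition, the sketch below extracts such a policy from a given optimal value function by taking the argmax over actions. The transition tensor `P[a, s, s']` $= P_{sa}(s')$ and the vector `V_star` are assumed inputs with placeholder values; how $V^*$ itself is computed is a separate question.

```python
import numpy as np

# Assumed inputs (placeholder numbers): P[a, s, s'] = P_{sa}(s') and an
# already-computed optimal value function V_star.
P = np.array([[[0.9, 0.1, 0.0],    # transitions under action 0
               [0.1, 0.8, 0.1],
               [0.0, 0.1, 0.9]],
              [[0.2, 0.8, 0.0],    # transitions under action 1
               [0.0, 0.2, 0.8],
               [0.5, 0.0, 0.5]]])
V_star = np.array([3.0, 1.0, 5.0])

# pi*(s) = argmax_a  sum_{s'} P_{sa}(s') V*(s'):
# expected optimal value of the next state, maximized over actions.
expected_next_value = P @ V_star          # shape (num_actions, num_states)
pi_star = expected_next_value.argmax(axis=0)
print(pi_star)                            # greedy action index for each state
```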