For example, one may choose to learn a linear model of the form

$$s_{t+1} = A s_t + B a_t,$$

using an algorithm similar to linear regression. Here, the parameters of the model are the matrices $A$ and $B$, and we can estimate them using the data collected from our $m$ trials, by picking

$$\arg\min_{A,B} \sum_{i=1}^{m} \sum_{t=0}^{T-1} \left\| s_{t+1}^{(i)} - \left( A s_t^{(i)} + B a_t^{(i)} \right) \right\|^2.$$
(This corresponds to the maximum likelihood estimate of the parameters.)
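For concreteness, the following sketch (in Python with NumPy; the function and array names are hypothetical) shows one way to compute this least-squares estimate by stacking all of the observed $(s_t, a_t, s_{t+1})$ triples from the trials into design matrices:

```python
import numpy as np

def fit_linear_model(states, actions, next_states):
    """Least-squares estimate of A, B in s_{t+1} ~ A s_t + B a_t.

    states:      (N, n_s) array of states s_t, stacked over all trials and time steps
    actions:     (N, n_a) array of the corresponding actions a_t
    next_states: (N, n_s) array of the observed successor states s_{t+1}
    """
    # Stack [s_t, a_t] into a single regressor and solve the joint
    # least-squares problem min_{A,B} sum ||s_{t+1} - (A s_t + B a_t)||^2.
    X = np.hstack([states, actions])                      # (N, n_s + n_a)
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)   # (n_s + n_a, n_s)
    n_s = states.shape[1]
    A = W[:n_s].T   # (n_s, n_s)
    B = W[n_s:].T   # (n_s, n_a)
    return A, B
```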
Having learned $A$ and $B$, one option is to build a deterministic model, in which given an input $s_t$ and $a_t$, the output $s_{t+1}$ is exactly determined. Specifically, we always compute $s_{t+1}$ according to Equation [link]. Alternatively, we may also build a stochastic model, in which $s_{t+1}$ is a random function of the inputs, by modelling it as

$$s_{t+1} = A s_t + B a_t + \epsilon_t,$$

where here $\epsilon_t$ is a noise term, usually modeled as $\epsilon_t \sim \mathcal{N}(0, \Sigma)$. (The covariance matrix $\Sigma$ can also be estimated from data in a straightforward way.)
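As a concrete illustration (again a sketch with hypothetical names, reusing the hypothetical fit_linear_model helper above), one can estimate $\Sigma$ from the residuals of the linear fit and then sample next states by adding Gaussian noise:

```python
def build_stochastic_model(A, B, states, actions, next_states, rng=None):
    """Return a sampler for s_{t+1} = A s_t + B a_t + eps, with eps ~ N(0, Sigma),
    where Sigma is estimated from the residuals of the linear fit."""
    rng = np.random.default_rng() if rng is None else rng
    residuals = next_states - (states @ A.T + actions @ B.T)
    Sigma = np.cov(residuals, rowvar=False)   # empirical noise covariance

    def sample_next_state(s, a):
        eps = rng.multivariate_normal(np.zeros(len(s)), Sigma)
        return A @ s + B @ a + eps

    return sample_next_state
```

Dropping the noise term (i.e., setting eps to zero) recovers the deterministic model.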
Here, we've written the next-state $s_{t+1}$ as a linear function of the current state and action; but of course, non-linear functions are also possible. Specifically, one can learn a model $s_{t+1} = A \phi_s(s_t) + B \phi_a(a_t)$, where $\phi_s$ and $\phi_a$ are some non-linear feature mappings of the states and actions. Alternatively, one can also use non-linear learning algorithms, such as locally weighted linear regression, to learn to estimate $s_{t+1}$ as a function of $s_t$ and $a_t$. These approaches can also be used to build either deterministic or stochastic simulators of an MDP.
We now describe the fitted value iteration algorithm for approximating the value function of a continuous state MDP. In the sequel, we will assume that the problem has a continuous state space $S = \mathbb{R}^n$, but that the action space $A$ is small and discrete. In practice, most MDPs have much smaller action spaces than state spaces. E.g., a car has a 6d state space and a 2d action space (steering and velocity controls); the inverted pendulum has a 4d state space and a 1d action space; a helicopter has a 12d state space and a 4d action space. So, discretizing the set of actions is usually less of a problem than discretizing the state space would have been.
Recall that in value iteration, we would like to perform the update

$$V(s) := R(s) + \gamma \max_a \int_{s'} P_{sa}(s') V(s') \, ds' = R(s) + \gamma \max_a \mathrm{E}_{s' \sim P_{sa}}[V(s')].$$
(In "Value iteration and policy iteration" , we had written the value iteration update with a summation rather than an integral over states; the new notation reflects that we are now working in continuous states rather than discrete states.)
The main idea of fitted value iteration is that we are going to approximately carry out this step, over a finite sample of states $s^{(1)}, \ldots, s^{(m)}$. Specifically, we will use a supervised learning algorithm—linear regression in our description below—to approximate the value function as a linear or non-linear function of the states:

$$V(s) = \theta^T \phi(s).$$
Here, $\phi$ is some appropriate feature mapping of the states.
For each state $s$ in our finite sample of $m$ states, fitted value iteration will first compute a quantity $y^{(i)}$, which will be our approximation to $R(s) + \gamma \max_a \mathrm{E}_{s' \sim P_{sa}}[V(s')]$ (the right hand side of Equation [link]). Then, it will apply a supervised learning algorithm to try to get $V(s)$ close to $R(s) + \gamma \max_a \mathrm{E}_{s' \sim P_{sa}}[V(s')]$ (or, in other words, to try to get $\theta^T \phi(s^{(i)})$ close to $y^{(i)}$).
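Putting this together, here is a minimal sketch of the resulting procedure (the helper names are hypothetical; it assumes a simulator `sample_next_state(s, a)` such as the one above, a reward function `R(s)`, and a feature map `phi(s)`), which alternates between computing the targets $y^{(i)}$ and fitting $\theta$ by linear regression:

```python
def fitted_value_iteration(states, actions, R, sample_next_state, phi,
                           gamma=0.99, k=10, n_iters=50):
    """Sketch of fitted value iteration with V(s) = theta^T phi(s).

    states:            sampled states s^(1), ..., s^(m)
    actions:           the (small, discrete) set of actions
    R(s):              reward function
    sample_next_state: simulator drawing s' ~ P_{sa} given (s, a)
    phi(s):            feature vector for state s
    k:                 number of samples used to estimate E_{s'~P_sa}[V(s')]
    """
    Phi = np.array([phi(s) for s in states])   # (m, d) design matrix
    theta = np.zeros(Phi.shape[1])

    def V(s):
        return phi(s) @ theta

    for _ in range(n_iters):
        y = []
        for s in states:
            # q[a] estimates R(s) + gamma * E_{s'~P_sa}[V(s')] for each action a.
            q = [R(s) + gamma * np.mean([V(sample_next_state(s, a))
                                         for _ in range(k)])
                 for a in actions]
            y.append(max(q))   # y^(i) approximates the right hand side of the update
        # Supervised-learning step: choose theta so that theta^T phi(s^(i)) is close to y^(i).
        theta, *_ = np.linalg.lstsq(Phi, np.array(y), rcond=None)
    return theta
```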