
MachineLearning-Lecture17

Instructor (Andrew Ng): Okay, good morning. Welcome back. So I hope all of you had a good Thanksgiving break. After the problem sets, I suspect many of us needed one. Just one quick announcement: as I announced by email a few days ago, this afternoon we’ll be doing another tape-ahead of lecture, so I won’t physically be here on Wednesday, and so we’ll be taping this Wednesday’s lecture ahead of time. If you’re free this afternoon, please come to that; it’ll be at 3:45 p.m. in the Skilling Auditorium, Skilling 193. But of course, you can also just show up in class at the usual time, or just watch it online as usual.

Okay, welcome back. What I want to do today is continue our discussion on reinforcement learning in MDPs. I have quite a lot to go over today, so most of today’s lecture will be on continuous state MDPs, and in particular, algorithms for solving continuous state MDPs. I’ll talk just very briefly about discretization; I’ll spend a lot of time talking about models, or simulators, of MDPs, and then talk about one algorithm called fitted value iteration and Q-functions, which build on that; and then hopefully I’ll have time to get to a second algorithm called approximate policy iteration.

Just to recap, right, in the previous lecture I defined the reinforcement learning problem and I defined MDPs, so let me just recap the notation. I said that an MDP, or Markov decision process, was a 5-tuple (S, A, {P_sa}, γ, R) comprising those five things, and the running example I was using last time was this one, right, adapted from the Russell and Norvig AI textbook. So this example MDP had 11 states, so that’s what S was. The actions were compass directions: north, south, east and west.
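To make the recap concrete, here is a minimal Python sketch of how that 5-tuple could be encoded for the 4x3 grid world; everything here, including the names states, actions, R, P, and step, is an illustrative assumption rather than code from the lecture.

```python
# A minimal sketch of the grid-world MDP 5-tuple (S, A, {P_sa}, gamma, R).
# All names (states, actions, R, P, step) are illustrative assumptions.

GAMMA = 0.99  # discount factor: a number slightly less than one

# S: the 11 states of the 4x3 grid, with the blocked cell (1, 1) removed.
states = [(r, c) for r in range(3) for c in range(4) if (r, c) != (1, 1)]

# A: the four compass directions.
actions = ['N', 'S', 'E', 'W']

def R(s):
    """Reward: +/-1 at the two absorbing states, -0.02 everywhere else."""
    if s == (0, 3):
        return 1.0
    if s == (1, 3):
        return -1.0
    return -0.02

MOVES = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1)}
LEFT_OF = {'N': 'W', 'W': 'S', 'S': 'E', 'E': 'N'}
RIGHT_OF = {'N': 'E', 'E': 'S', 'S': 'W', 'W': 'N'}

def step(s, a):
    """Move one cell; bumping into a wall or the blocked cell leaves you in place."""
    s2 = (s[0] + MOVES[a][0], s[1] + MOVES[a][1])
    return s2 if s2 in states else s

def P(s, a):
    """P_sa: 0.8 chance of going the intended way, 0.1 of veering to each side."""
    dist = {}
    for direction, prob in [(a, 0.8), (LEFT_OF[a], 0.1), (RIGHT_OF[a], 0.1)]:
        s2 = step(s, direction)
        dist[s2] = dist.get(s2, 0.0) + prob
    return dist
```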

The state transition probabilities P_sa capture the chance of transitioning to each state when you take any action in any given state, and so in our example they captured the stochastic dynamics of our robot wandering around the grid. We said that if you take the action north, you have a 0.8 chance of actually going north, a 0.1 chance of veering off to the left, and a 0.1 chance of veering off to the right, so that modeled the robot’s noisy dynamics. And the reward function was +/-1 at the absorbing states and -0.02 elsewhere. So this is an example of an MDP, and that’s what those five things were. Oh, and I used a discount factor γ of usually a number slightly less than one, so that’s the 0.99.

And so our goal was to find the policy, the control policy, and that’s π, which is a function mapping from the states to the actions that tells us what action to take in every state, and our goal was to find a policy that maximizes the expected value of our total payoff. So we want to find a policy. Well, let’s see. We define the value function V^π(s) to be equal to this:

V^π(s) = E[ R(s_0) + γ R(s_1) + γ² R(s_2) + ⋯ | π, s_0 = s ]

We said that the value of a policy π from state s was given by the expected value of the sum of discounted rewards, conditioned on your executing the policy π and your starting off in the state s. And so our strategy for finding the policy was sort of comprised of two steps. So the goal is to find a good policy that maximizes the expected value of the sum of discounted rewards, and I said last time that one strategy for finding a good policy is to first compute the optimal value function, which I denoted V*(s) and is defined like that:

V*(s) = max_π V^π(s)

It’s the maximum value that any policy can obtain, and for example, the optimal value function for that MDP looks like this. So in other words, starting from any of these states, this is the expected value of the sum of discounted rewards you get, so this is V*. We also said that once you’ve found V*, you can compute the optimal policy using this:

π*(s) = arg max_a Σ_{s'} P_sa(s') V*(s')
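As a concrete sketch of that two-step strategy, here is a minimal value iteration implementation that computes V* and then reads off π*; it reuses the illustrative states, actions, P, R, and GAMMA from the sketch above and is a generic textbook implementation, not code shown in the lecture.

```python
def value_iteration(tol=1e-6):
    """Compute V* by repeatedly applying the Bellman backup
    V(s) <- R(s) + gamma * max_a sum_{s'} P_sa(s') V(s')."""
    V = {s: 0.0 for s in states}
    absorbing = {(0, 3), (1, 3)}  # no further reward once these are reached
    while True:
        delta = 0.0
        for s in states:
            if s in absorbing:
                v = R(s)
            else:
                v = R(s) + GAMMA * max(
                    sum(prob * V[s2] for s2, prob in P(s, a).items())
                    for a in actions)
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

def optimal_policy(V):
    """pi*(s) = argmax_a sum_{s'} P_sa(s') V*(s')."""
    return {s: max(actions,
                   key=lambda a: sum(prob * V[s2]
                                     for s2, prob in P(s, a).items()))
            for s in states}

V_star = value_iteration()        # approximates the V* pictured on the board
pi_star = optimal_policy(V_star)  # the corresponding optimal policy
```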






Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4
