
MachineLearning-Lecture17

Instructor (Andrew Ng): Okay, good morning. Welcome back. So I hope all of you had a good Thanksgiving break. After the problem sets, I suspect many of us needed one. Just one quick announcement: as I announced by email a few days ago, this afternoon we'll be doing another tape-ahead of lecture, so I won't physically be here on Wednesday, and so we'll be taping this Wednesday's lecture ahead of time. If you're free this afternoon, please come to that; it'll be at 3:45 p.m. in the Skilling Auditorium, in Skilling 193 at 3:45. But of course, you can also just show up in class as usual at the usual time, or just watch it online as usual also.

Okay, welcome back. What I want to do today is continue our discussion on reinforcement learning in MDPs. There's quite a lot for me to go over today, so most of today's lecture will be on continuous state MDPs, and in particular, algorithms for solving continuous state MDPs. I'll talk just very briefly about discretization. I'll spend a lot of time talking about models, or simulators, of MDPs, and then talk about one algorithm called fitted value iteration, and Q-functions, which build on that, and then hopefully I'll have time to get to a second algorithm called approximate policy iteration.

Just to recap, right, in the previous lecture, I defined the reinforcement learning problem and I defined MDPs, so let me just recap the notation. I said that an MDP, or a Markov Decision Process, was a five-tuple (S, A, {P_sa}, γ, R) comprising those things, and the running example I was using last time was this one, right, adapted from the Russell and Norvig AI textbook. So in this example MDP that I was using, it had 11 states, so that's what S was. The actions were the compass directions: north, south, east and west.

The state transition probabilities P_sa capture the chance of your transitioning to each state when you take an action in any given state, and so in our example they captured the stochastic dynamics of our robot wandering around [inaudible]. We said if you take the action north, you have a 0.8 chance of actually going north, a 0.1 chance of veering off to the left, and a 0.1 chance of veering off to the right, so this modeled the robot's noisy dynamics with a [inaudible]. And the reward function was +/-1 at the absorbing states and -0.02 elsewhere. This is an example of an MDP, and that's what these five things were. Oh, and I used a discount factor γ, usually a number slightly less than one, so that's the 0.99.

And so our goal was to find the policy, the control policy, and that's π, which is a function mapping from the states to the actions that tells us what action to take in every state, and our goal was to find a policy that maximizes the expected value of our total payoff. So we want to find a policy. Well, let's see. We define value functions V^π(s) to be equal to this. We said that the value of a policy π from state s was given by the expected value of the sum of discounted rewards, V^π(s) = E[R(s_0) + γR(s_1) + γ²R(s_2) + ⋯ | π, s_0 = s], conditioned on your executing the policy π and your starting off [inaudible] in the state s.

And so our strategy for finding a policy was sort of comprised of two steps. So the goal is to find a good policy that maximizes the expected value of the sum of discounted rewards, and so I said last time that one strategy for finding the [inaudible] of a policy is to first compute the optimal value function, which I denoted V*(s) and is defined like that. It's the maximum value that any policy can obtain, V*(s) = max_π V^π(s), and for example, the optimal value function for that MDP looks like this. So in other words, starting from any of these states, what's the expected value of the sum of discounted rewards you get? So this is V*.
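[Editor's note: the value of a fixed policy described above can be computed exactly by solving the linear Bellman equations. A minimal sketch, on a hypothetical 2-state MDP with made-up numbers (not the lecture's 11-state gridworld):]

```python
import numpy as np

# Policy evaluation sketch: for a fixed policy pi, the value function
# satisfies the linear Bellman equations V^pi = R + gamma * P_pi V^pi,
# so V^pi = (I - gamma * P_pi)^{-1} R. All numbers below are illustrative.
gamma = 0.99                   # discount factor, slightly less than one
P_pi = np.array([[0.9, 0.1],   # P_pi[s, s'] = P(s' | s, pi(s))
                 [0.2, 0.8]])
R = np.array([0.0, 1.0])       # reward for being in each state

V_pi = np.linalg.solve(np.eye(2) - gamma * P_pi, R)
print(V_pi)                    # state 1, near the reward, is worth more
```

Solving the linear system directly works here because the state space is tiny; for large or continuous state spaces, this is exactly what becomes infeasible and motivates the approximation methods in this lecture.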
We also said that once you've found V*, you can compute the optimal policy using this: π*(s) = arg max_a Σ_{s'} P_sa(s') V*(s').
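[Editor's note: the two-step strategy above, computing V* and then reading off the greedy policy, can be sketched with value iteration on a hypothetical 2-state, 2-action MDP; all numbers are made up for illustration.]

```python
import numpy as np

gamma = 0.99
# P[a, s, s'] = probability of landing in s' after taking action a in s.
P = np.array([[[0.9, 0.1],   # action 0 ("stay") mostly keeps the state
               [0.1, 0.9]],
              [[0.2, 0.8],   # action 1 ("move") mostly flips it
               [0.8, 0.2]]])
R = np.array([0.0, 1.0])     # reward depends only on the current state

# Value iteration: V(s) <- R(s) + gamma * max_a sum_{s'} P_sa(s') V(s')
V = np.zeros(2)
for _ in range(2000):
    V = R + gamma * (P @ V).max(axis=0)

# Once V* is (approximately) found, the optimal policy is the greedy one:
# pi*(s) = argmax_a sum_{s'} P_sa(s') V*(s')
pi = (P @ V).argmax(axis=0)
print(pi)                    # move toward the rewarding state, then stay
```

Note that `(P @ V)` computes, for every action a and state s, the expected next-state value Σ_{s'} P_sa(s') V(s') in one batched matrix-vector product.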






Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4
