
Instructor (Andrew Ng): Okay. Good morning. Just one quick announcement before I start. Poster session, next Wednesday, 8:30, as you already know, and poster boards will be made available soon. The poster boards we have are 20 inches by 30 inches, in case you want to start designing your posters. That's 20 inches by 30 inches. And they will be available this Friday, and you can pick them up from Nicki Salgudo, who's in Gates 187, starting this Friday. I'll send out this information by e-mail as well, in case you don't want to write it down.

For those of you who are SCPD students, if you want to show up here only on Wednesday for the poster session itself, we'll also have blank posters there, or you're also welcome to buy your own poster boards. If you do take poster boards from us, then please treat them well. For the sake of the environment, we do ask you to give them back at the end of the poster session; we'll recycle them from year to year. So if you do take one from us, please don't cut holes in it or anything.

So welcome to the last lecture of this course. What I want to do today is tell you about one final class of reinforcement learning algorithms. I just want to say a little bit about POMDPs, the partially observable MDPs, and then the main technical topic for today will be policy search algorithms. I'll talk about two specific algorithms, called REINFORCE and Pegasus, and then we'll wrap up the class. So if you recall from the last lecture, I actually started to talk about one specific example of a POMDP, which was this sort of linear dynamical system. This is the LQR, linear quadratic regulation, problem, but I changed it and asked: what if we only have observations y_t? What if we couldn't observe the state of the system directly, but had to choose an action based only on some noisy observations that may be some function of the state?

So our strategy last time was as follows. We said that in the fully observable case, we could choose actions a_t = L_t s_t. So L_t was this matrix of parameters that [inaudible] describe the dynamic programming algorithm for finite-horizon MDPs in the LQR problem. And so we said, if only we knew what the state was, we would choose actions according to some matrix L_t times the state. And then I said that in the partially observable case, we would compute these estimates. I wrote them as s_{t|t}, which were our best estimate for what the state is given all the observations. And in particular, I talked about a Kalman filter, with which we worked out our posterior distribution of what the state is given all the observations up to a certain time.
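As a minimal sketch of where these L_t matrices come from, the backward (Riccati) dynamic programming recursion for finite-horizon LQR can be written as follows, assuming linear dynamics s_{t+1} = A s_t + B a_t and a quadratic cost s_t' Q s_t + a_t' R a_t to be minimized (the function name and matrices here are illustrative assumptions, not notation fixed by the lecture):

```python
import numpy as np

def finite_horizon_lqr_gains(A, B, Q, R, T):
    """Backward (Riccati) recursion for finite-horizon LQR.

    Assumed model:   s_{t+1} = A s_t + B a_t (+ Gaussian noise)
    Cost to minimize: sum_t  s_t' Q s_t + a_t' R a_t
    Returns gains L_0, ..., L_{T-1} so that the optimal action is a_t = L_t s_t.
    """
    P = Q.copy()               # value-function matrix at the horizon, P_T = Q
    gains = [None] * T
    for t in reversed(range(T)):
        # L_t = -(R + B' P_{t+1} B)^{-1} B' P_{t+1} A
        L = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        gains[t] = L
        # Riccati update: P_t = Q + A' P_{t+1} (A + B L_t)
        P = Q + A.T @ P @ (A + B @ L)
    return gains
```

Note that the gains depend only on the model matrices, not on the noise, which is what makes the plug-in strategy below possible.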

So, from last time: given the observations y_1 through y_t, our posterior distribution of the current state s_t is Gaussian, with mean s_{t|t} and covariance Σ_{t|t}; that is, s_t | y_1, …, y_t ~ N(s_{t|t}, Σ_{t|t}). So I said we use a Kalman filter to compute this s_{t|t}, which is going to be our best guess for what the state is currently. And then we choose actions using our estimate for what the state is, rather than the true state, because we don't know the true state anymore in this POMDP. So it turns out that this specific strategy actually allows you to choose actions as well as you possibly can, given that this is a POMDP, and given there are these noisy observations. It turns out that, in general, finding optimal policies for these sorts of partially observable MDPs is an NP-hard problem. Just to be concrete about the formalism: a POMDP is formally a tuple (S, A, Y, {P_sa}, {O_s}, T, R), where the changes from an MDP are that Y is the set of possible observations and the O_s are the observation distributions. And so at each step in the POMDP, if we're in some state s_t, we observe some observation y_t drawn from the observation distribution O_{s_t}, which is indexed by the current state. So while computing the optimal policy in a general POMDP is NP-hard, for the specific case of linear dynamical systems with the Kalman filter model, we have this strategy of computing the optimal policy assuming full observability, estimating the states from the observations, and then plugging the two together.
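As a minimal sketch of this plug-the-two-together strategy, assuming the linear-Gaussian model s_{t+1} = A s_t + B a_t + w_t with w_t ~ N(0, Σ_w) and y_t = C s_t + v_t with v_t ~ N(0, Σ_v) (all names here are illustrative assumptions), one Kalman filter step followed by certainty-equivalence control looks like this:

```python
import numpy as np

def kalman_step(s_est, Sigma, a_prev, y, A, B, C, Sigma_w, Sigma_v):
    """One predict/update step of the Kalman filter.

    Assumed model:  s_{t+1} = A s_t + B a_t + w_t,  w_t ~ N(0, Sigma_w)
                    y_t     = C s_t + v_t,          v_t ~ N(0, Sigma_v)
    Returns the posterior mean s_{t|t} and covariance Sigma_{t|t}.
    """
    # Predict: push the previous posterior through the dynamics.
    s_pred = A @ s_est + B @ a_prev                  # s_{t|t-1}
    Sigma_pred = A @ Sigma @ A.T + Sigma_w           # Sigma_{t|t-1}
    # Update: correct the prediction with the new observation y_t.
    K = Sigma_pred @ C.T @ np.linalg.inv(C @ Sigma_pred @ C.T + Sigma_v)
    s_new = s_pred + K @ (y - C @ s_pred)            # s_{t|t}
    Sigma_new = (np.eye(len(s_new)) - K @ C) @ Sigma_pred
    return s_new, Sigma_new

# Certainty-equivalence control: act on the estimate, not the true state.
# gains = finite_horizon_lqr_gains(A, B, Q, R, T)   # from the sketch above
# a_t = gains[t] @ s_est                            # a_t = L_t s_{t|t}
```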


Source: OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013. Download for free at http://cnx.org/content/col11500/1.4